Jun 20 18:22:40.008126 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jun 20 18:22:40.008144 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025 Jun 20 18:22:40.008150 kernel: KASLR enabled Jun 20 18:22:40.008154 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 20 18:22:40.008159 kernel: printk: legacy bootconsole [pl11] enabled Jun 20 18:22:40.008163 kernel: efi: EFI v2.7 by EDK II Jun 20 18:22:40.008168 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20d018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 Jun 20 18:22:40.008172 kernel: random: crng init done Jun 20 18:22:40.008175 kernel: secureboot: Secure boot disabled Jun 20 18:22:40.008179 kernel: ACPI: Early table checksum verification disabled Jun 20 18:22:40.008183 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jun 20 18:22:40.008187 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008191 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008195 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 20 18:22:40.008200 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008205 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008209 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008214 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008218 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008222 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008226 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 20 18:22:40.008230 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:22:40.008234 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 20 18:22:40.008238 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 20 18:22:40.008242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 20 18:22:40.008246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jun 20 18:22:40.008250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jun 20 18:22:40.008255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 20 18:22:40.008259 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 20 18:22:40.008264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 20 18:22:40.008268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 20 18:22:40.008272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 20 18:22:40.008276 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 20 18:22:40.008280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 20 18:22:40.008284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 20 18:22:40.008288 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 20 18:22:40.008292 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jun 20 18:22:40.008296 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] Jun 20 18:22:40.008300 kernel: Zone ranges: Jun 20 18:22:40.008304 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 20 18:22:40.008311 kernel: DMA32 empty Jun 20 18:22:40.008315 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:22:40.008320 kernel: Device empty Jun 20 18:22:40.008324 kernel: Movable zone start for each node Jun 20 18:22:40.008328 kernel: Early memory node ranges Jun 20 18:22:40.008333 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 20 18:22:40.008338 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jun 20 18:22:40.008342 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jun 20 18:22:40.008346 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jun 20 18:22:40.008350 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jun 20 18:22:40.008354 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jun 20 18:22:40.008359 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jun 20 18:22:40.008363 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jun 20 18:22:40.008367 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:22:40.008371 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 20 18:22:40.008376 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 20 18:22:40.008380 kernel: psci: probing for conduit method from ACPI. Jun 20 18:22:40.008385 kernel: psci: PSCIv1.1 detected in firmware. Jun 20 18:22:40.008400 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:22:40.008405 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jun 20 18:22:40.008409 kernel: psci: SMC Calling Convention v1.4 Jun 20 18:22:40.008413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 20 18:22:40.008418 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 20 18:22:40.008422 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 20 18:22:40.008426 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 20 18:22:40.008431 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:22:40.008435 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:22:40.008439 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jun 20 18:22:40.008445 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:22:40.008449 kernel: CPU features: detected: Spectre-v4 Jun 20 18:22:40.008453 kernel: CPU features: detected: Spectre-BHB Jun 20 18:22:40.008458 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 20 18:22:40.008462 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 20 18:22:40.008466 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jun 20 18:22:40.008471 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 20 18:22:40.008475 kernel: alternatives: applying boot alternatives Jun 20 18:22:40.008480 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:22:40.008485 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:22:40.008489 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:22:40.008494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:22:40.008499 kernel: Fallback order for Node 0: 0 Jun 20 18:22:40.008503 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jun 20 18:22:40.008507 kernel: Policy zone: Normal Jun 20 18:22:40.008512 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:22:40.008516 kernel: software IO TLB: area num 2. Jun 20 18:22:40.008520 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jun 20 18:22:40.008524 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:22:40.008529 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:22:40.008534 kernel: rcu: RCU event tracing is enabled. Jun 20 18:22:40.008538 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:22:40.008543 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:22:40.008548 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:22:40.008552 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:22:40.008556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:22:40.008561 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:22:40.008565 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:22:40.008569 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:22:40.008573 kernel: GICv3: 960 SPIs implemented Jun 20 18:22:40.008578 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:22:40.008582 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:22:40.008586 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jun 20 18:22:40.008591 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jun 20 18:22:40.008596 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 20 18:22:40.008600 kernel: ITS: No ITS available, not enabling LPIs Jun 20 18:22:40.008604 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:22:40.008609 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jun 20 18:22:40.008613 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 18:22:40.008618 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jun 20 18:22:40.008622 kernel: Console: colour dummy device 80x25 Jun 20 18:22:40.008627 kernel: printk: legacy console [tty1] enabled Jun 20 18:22:40.008631 kernel: ACPI: Core revision 20240827 Jun 20 18:22:40.008636 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jun 20 18:22:40.008641 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:22:40.008646 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 18:22:40.008650 kernel: landlock: Up and running. Jun 20 18:22:40.008655 kernel: SELinux: Initializing. Jun 20 18:22:40.008659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:22:40.008664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:22:40.008672 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jun 20 18:22:40.008677 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0 Jun 20 18:22:40.008682 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 20 18:22:40.008687 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:22:40.008691 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:22:40.008696 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 18:22:40.008702 kernel: Remapping and enabling EFI services. Jun 20 18:22:40.008706 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:22:40.008711 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:22:40.008716 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 20 18:22:40.008720 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jun 20 18:22:40.008726 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:22:40.008731 kernel: SMP: Total of 2 processors activated. 
Jun 20 18:22:40.008735 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:22:40.008740 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:22:40.008745 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 20 18:22:40.008749 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 20 18:22:40.008754 kernel: CPU features: detected: Common not Private translations Jun 20 18:22:40.008759 kernel: CPU features: detected: CRC32 instructions Jun 20 18:22:40.008763 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jun 20 18:22:40.008769 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 20 18:22:40.008774 kernel: CPU features: detected: LSE atomic instructions Jun 20 18:22:40.008778 kernel: CPU features: detected: Privileged Access Never Jun 20 18:22:40.008783 kernel: CPU features: detected: Speculation barrier (SB) Jun 20 18:22:40.008788 kernel: CPU features: detected: TLB range maintenance instructions Jun 20 18:22:40.008792 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 20 18:22:40.008797 kernel: CPU features: detected: Scalable Vector Extension Jun 20 18:22:40.008802 kernel: alternatives: applying system-wide alternatives Jun 20 18:22:40.008807 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jun 20 18:22:40.008812 kernel: SVE: maximum available vector length 16 bytes per vector Jun 20 18:22:40.008817 kernel: SVE: default vector length 16 bytes per vector Jun 20 18:22:40.008822 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved) Jun 20 18:22:40.008826 kernel: devtmpfs: initialized Jun 20 18:22:40.008831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:22:40.008836 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:22:40.008841 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 20 18:22:40.008845 kernel: 0 pages in range for non-PLT usage Jun 20 18:22:40.008850 kernel: 508544 pages in range for PLT usage Jun 20 18:22:40.008855 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:22:40.008860 kernel: SMBIOS 3.1.0 present. Jun 20 18:22:40.008865 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jun 20 18:22:40.008870 kernel: DMI: Memory slots populated: 2/2 Jun 20 18:22:40.008874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:22:40.008879 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:22:40.008884 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:22:40.008889 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:22:40.008893 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:22:40.008899 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jun 20 18:22:40.008904 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:22:40.008908 kernel: cpuidle: using governor menu Jun 20 18:22:40.008913 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 18:22:40.008918 kernel: ASID allocator initialised with 32768 entries Jun 20 18:22:40.008923 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:22:40.008927 kernel: Serial: AMBA PL011 UART driver Jun 20 18:22:40.008932 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:22:40.008937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:22:40.008942 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:22:40.008947 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:22:40.008951 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:22:40.008956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:22:40.008961 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:22:40.008966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:22:40.008970 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:22:40.008975 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:22:40.008980 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:22:40.008985 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:22:40.008990 kernel: ACPI: Interpreter enabled Jun 20 18:22:40.008994 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:22:40.008999 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 20 18:22:40.009004 kernel: printk: legacy console [ttyAMA0] enabled Jun 20 18:22:40.009008 kernel: printk: legacy bootconsole [pl11] disabled Jun 20 18:22:40.009013 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 20 18:22:40.009018 kernel: ACPI: CPU0 has been hot-added Jun 20 18:22:40.009022 kernel: ACPI: CPU1 has been hot-added Jun 20 18:22:40.009028 kernel: iommu: Default domain type: Translated Jun 20 18:22:40.009033 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 20 18:22:40.009037 kernel: efivars: Registered efivars operations Jun 20 18:22:40.009042 kernel: vgaarb: loaded Jun 20 18:22:40.009047 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 18:22:40.009051 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:22:40.009056 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:22:40.009061 kernel: pnp: PnP ACPI init Jun 20 18:22:40.009065 kernel: pnp: PnP ACPI: found 0 devices Jun 20 18:22:40.009071 kernel: NET: Registered PF_INET protocol family Jun 20 18:22:40.009075 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:22:40.009080 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:22:40.009085 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:22:40.009090 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:22:40.009095 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:22:40.009099 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:22:40.009104 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:22:40.009109 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:22:40.009114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:22:40.009119 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:22:40.009123 kernel: kvm [1]: HYP mode not available Jun 
20 18:22:40.009128 kernel: Initialise system trusted keyrings Jun 20 18:22:40.009133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:22:40.009137 kernel: Key type asymmetric registered Jun 20 18:22:40.009142 kernel: Asymmetric key parser 'x509' registered Jun 20 18:22:40.009147 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 20 18:22:40.009151 kernel: io scheduler mq-deadline registered Jun 20 18:22:40.009157 kernel: io scheduler kyber registered Jun 20 18:22:40.009161 kernel: io scheduler bfq registered Jun 20 18:22:40.009166 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:22:40.009171 kernel: thunder_xcv, ver 1.0 Jun 20 18:22:40.009175 kernel: thunder_bgx, ver 1.0 Jun 20 18:22:40.009180 kernel: nicpf, ver 1.0 Jun 20 18:22:40.009185 kernel: nicvf, ver 1.0 Jun 20 18:22:40.009290 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:22:40.009342 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:22:39 UTC (1750443759) Jun 20 18:22:40.009349 kernel: efifb: probing for efifb Jun 20 18:22:40.009353 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 20 18:22:40.009358 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 20 18:22:40.009363 kernel: efifb: scrolling: redraw Jun 20 18:22:40.009368 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 18:22:40.009373 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:22:40.009377 kernel: fb0: EFI VGA frame buffer device Jun 20 18:22:40.009382 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 20 18:22:40.009394 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:22:40.009400 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jun 20 18:22:40.009404 kernel: watchdog: NMI not fully supported Jun 20 18:22:40.009409 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:22:40.009414 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:22:40.009418 kernel: Segment Routing with IPv6 Jun 20 18:22:40.009423 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:22:40.009428 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:22:40.009432 kernel: Key type dns_resolver registered Jun 20 18:22:40.009438 kernel: registered taskstats version 1 Jun 20 18:22:40.009443 kernel: Loading compiled-in X.509 certificates Jun 20 18:22:40.009448 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377' Jun 20 18:22:40.009452 kernel: Demotion targets for Node 0: null Jun 20 18:22:40.009457 kernel: Key type .fscrypt registered Jun 20 18:22:40.009462 kernel: Key type fscrypt-provisioning registered Jun 20 18:22:40.009466 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:22:40.009471 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:22:40.009476 kernel: ima: No architecture policies found Jun 20 18:22:40.009481 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:22:40.009486 kernel: clk: Disabling unused clocks Jun 20 18:22:40.009490 kernel: PM: genpd: Disabling unused power domains Jun 20 18:22:40.009495 kernel: Warning: unable to open an initial console. 
Jun 20 18:22:40.009500 kernel: Freeing unused kernel memory: 39424K Jun 20 18:22:40.009504 kernel: Run /init as init process Jun 20 18:22:40.009509 kernel: with arguments: Jun 20 18:22:40.009514 kernel: /init Jun 20 18:22:40.009518 kernel: with environment: Jun 20 18:22:40.009523 kernel: HOME=/ Jun 20 18:22:40.009528 kernel: TERM=linux Jun 20 18:22:40.009533 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:22:40.009538 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:22:40.009545 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:22:40.009551 systemd[1]: Detected virtualization microsoft. Jun 20 18:22:40.009556 systemd[1]: Detected architecture arm64. Jun 20 18:22:40.009561 systemd[1]: Running in initrd. Jun 20 18:22:40.009566 systemd[1]: No hostname configured, using default hostname. Jun 20 18:22:40.009572 systemd[1]: Hostname set to . Jun 20 18:22:40.009577 systemd[1]: Initializing machine ID from random generator. Jun 20 18:22:40.009582 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:22:40.009587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:22:40.009592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:22:40.009598 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:22:40.009604 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:22:40.009609 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:22:40.009615 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:22:40.009620 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:22:40.009626 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:22:40.009631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:22:40.009636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:22:40.009642 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:22:40.009648 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:22:40.009653 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:22:40.009658 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:22:40.009663 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:22:40.009668 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:22:40.009673 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:22:40.009678 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:22:40.009683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:22:40.009689 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jun 20 18:22:40.009694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:22:40.009699 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:22:40.009705 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:22:40.009710 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:22:40.009715 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:22:40.009720 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 18:22:40.009725 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:22:40.009731 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:22:40.009736 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:22:40.009751 systemd-journald[225]: Collecting audit messages is disabled. Jun 20 18:22:40.009769 systemd-journald[225]: Journal started Jun 20 18:22:40.009783 systemd-journald[225]: Runtime Journal (/run/log/journal/b27cea061420468da6b74aaf34ea240d) is 8M, max 78.5M, 70.5M free. Jun 20 18:22:40.023149 systemd-modules-load[227]: Inserted module 'overlay' Jun 20 18:22:40.036628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:22:40.036647 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:22:40.048437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:22:40.049148 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:22:40.060675 kernel: Bridge firewalling registered Jun 20 18:22:40.056586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:22:40.060079 systemd-modules-load[227]: Inserted module 'br_netfilter' Jun 20 18:22:40.072531 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:22:40.078023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:22:40.084781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:40.095223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:22:40.105058 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:22:40.125446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:22:40.133362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:22:40.150609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:22:40.162543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:22:40.169848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:22:40.178807 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 18:22:40.183488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:22:40.194011 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:22:40.210822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 20 18:22:40.222701 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:22:40.242710 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:22:40.248644 systemd-resolved[264]: Positive Trust Anchors: Jun 20 18:22:40.248652 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:22:40.248672 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:22:40.250301 systemd-resolved[264]: Defaulting to hostname 'linux'. Jun 20 18:22:40.255725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:22:40.260031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:22:40.305770 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:22:40.387440 kernel: SCSI subsystem initialized Jun 20 18:22:40.393416 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:22:40.401417 kernel: iscsi: registered transport (tcp) Jun 20 18:22:40.413458 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:22:40.413470 kernel: QLogic iSCSI HBA Driver Jun 20 18:22:40.427014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:22:40.444759 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:22:40.450614 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:22:40.495609 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:22:40.500918 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:22:40.561408 kernel: raid6: neonx8 gen() 18525 MB/s Jun 20 18:22:40.578397 kernel: raid6: neonx4 gen() 18565 MB/s Jun 20 18:22:40.597395 kernel: raid6: neonx2 gen() 17081 MB/s Jun 20 18:22:40.617479 kernel: raid6: neonx1 gen() 15035 MB/s Jun 20 18:22:40.636396 kernel: raid6: int64x8 gen() 10532 MB/s Jun 20 18:22:40.655395 kernel: raid6: int64x4 gen() 10618 MB/s Jun 20 18:22:40.674480 kernel: raid6: int64x2 gen() 8980 MB/s Jun 20 18:22:40.695788 kernel: raid6: int64x1 gen() 7022 MB/s Jun 20 18:22:40.695831 kernel: raid6: using algorithm neonx4 gen() 18565 MB/s Jun 20 18:22:40.717457 kernel: raid6: .... 
xor() 15136 MB/s, rmw enabled Jun 20 18:22:40.717496 kernel: raid6: using neon recovery algorithm Jun 20 18:22:40.726292 kernel: xor: measuring software checksum speed Jun 20 18:22:40.726300 kernel: 8regs : 28468 MB/sec Jun 20 18:22:40.728724 kernel: 32regs : 28769 MB/sec Jun 20 18:22:40.731024 kernel: arm64_neon : 37432 MB/sec Jun 20 18:22:40.733793 kernel: xor: using function: arm64_neon (37432 MB/sec) Jun 20 18:22:40.772412 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:22:40.779424 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:22:40.787949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:22:40.810304 systemd-udevd[474]: Using default interface naming scheme 'v255'. Jun 20 18:22:40.817163 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:22:40.829382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:22:40.858444 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Jun 20 18:22:40.876622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:22:40.882429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:22:40.929362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:22:40.936030 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:22:41.004573 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 18:22:41.004490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:22:41.004584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:41.016274 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:22:41.056984 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 18:22:41.057003 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 18:22:41.057010 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 20 18:22:41.057023 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 18:22:41.057029 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 18:22:41.030078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:22:41.073565 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 18:22:41.073583 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 18:22:41.073596 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 20 18:22:41.079110 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 18:22:41.079230 kernel: scsi host0: storvsc_host_t Jun 20 18:22:41.083602 kernel: scsi host1: storvsc_host_t Jun 20 18:22:41.082065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:22:41.101180 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 20 18:22:41.101234 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 18:22:41.082170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:41.101353 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:22:41.103520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 18:22:41.125722 kernel: hv_netvsc 002248bb-ff32-0022-48bb-ff32002248bb eth0: VF slot 1 added Jun 20 18:22:41.130410 kernel: PTP clock support registered Jun 20 18:22:41.142511 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 18:22:41.142541 kernel: hv_vmbus: registering driver hv_pci Jun 20 18:22:41.142549 kernel: hv_vmbus: registering driver hv_utils Jun 20 18:22:41.143406 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 18:22:41.148161 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 18:22:41.151427 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 18:22:40.676958 systemd-resolved[264]: Clock change detected. Flushing caches. Jun 20 18:22:40.686547 systemd-journald[225]: Time jumped backwards, rotating. Jun 20 18:22:40.686095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:40.711063 kernel: hv_pci e46c0e64-f33e-4427-81e6-c25049e3e073: PCI VMBus probing: Using version 0x10004 Jun 20 18:22:40.711212 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 20 18:22:40.711295 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 18:22:40.711361 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 20 18:22:40.711420 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 18:22:40.711478 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 18:22:40.713855 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 20 18:22:40.723271 kernel: hv_pci e46c0e64-f33e-4427-81e6-c25049e3e073: PCI host bridge to bus f33e:00 Jun 20 18:22:40.723415 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 20 18:22:40.723486 kernel: pci_bus f33e:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 20 18:22:40.728168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#21 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:22:40.738854 kernel: pci_bus f33e:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 18:22:40.745014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#34 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:22:40.745155 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 18:22:40.746034 kernel: pci f33e:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jun 20 18:22:40.755052 kernel: pci f33e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:22:40.758011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:22:40.758029 kernel: pci f33e:00:02.0: enabling Extended Tags Jun 20 18:22:40.765705 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 18:22:40.777084 kernel: pci f33e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f33e:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jun 20 18:22:40.788436 kernel: pci_bus f33e:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 18:22:40.788560 kernel: pci f33e:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jun 20 18:22:40.800026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#235 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 18:22:40.822018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 18:22:40.858634 kernel: mlx5_core f33e:00:02.0: enabling device (0000 -> 0002) Jun 20 18:22:40.867088 kernel: mlx5_core f33e:00:02.0: PTM is not supported by PCIe Jun 20 18:22:40.867208 kernel: mlx5_core f33e:00:02.0: firmware version: 16.30.5006 Jun 20 18:22:41.032521 kernel: hv_netvsc 
002248bb-ff32-0022-48bb-ff32002248bb eth0: VF registering: eth1 Jun 20 18:22:41.032733 kernel: mlx5_core f33e:00:02.0 eth1: joined to eth0 Jun 20 18:22:41.040028 kernel: mlx5_core f33e:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 20 18:22:41.048015 kernel: mlx5_core f33e:00:02.0 enP62270s1: renamed from eth1 Jun 20 18:22:41.510879 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 20 18:22:41.568829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:22:41.590167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 20 18:22:41.599871 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 20 18:22:41.605408 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:22:41.634173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 20 18:22:41.643276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:22:41.650050 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:22:41.657016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#228 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:22:41.666113 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:22:41.801102 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:22:41.824128 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:22:41.829784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:22:41.839686 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:22:41.849695 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:22:41.876258 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:22:42.674021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#198 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:22:42.685866 disk-uuid[653]: The operation has completed successfully. Jun 20 18:22:42.689967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:22:42.747138 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:22:42.747234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:22:42.771076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:22:42.792075 sh[824]: Success Jun 20 18:22:42.828495 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:22:42.828547 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:22:42.833082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 18:22:42.842033 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 20 18:22:43.047972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:22:43.055729 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:22:43.072703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 18:22:43.096834 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 18:22:43.096891 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (842) Jun 20 18:22:43.105514 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3 Jun 20 18:22:43.105543 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:22:43.108389 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 18:22:43.399812 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:22:43.403800 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:22:43.411014 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:22:43.411670 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:22:43.436693 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:22:43.465294 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (866) Jun 20 18:22:43.465334 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:22:43.469841 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:22:43.473338 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:22:43.513074 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:22:43.513851 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:22:43.519062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:22:43.555595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:22:43.565831 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:22:43.597391 systemd-networkd[1011]: lo: Link UP Jun 20 18:22:43.597399 systemd-networkd[1011]: lo: Gained carrier Jun 20 18:22:43.598640 systemd-networkd[1011]: Enumeration completed Jun 20 18:22:43.599061 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:22:43.599064 systemd-networkd[1011]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:22:43.600094 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:22:43.604595 systemd[1]: Reached target network.target - Network. Jun 20 18:22:43.669020 kernel: mlx5_core f33e:00:02.0 enP62270s1: Link up Jun 20 18:22:43.702024 kernel: hv_netvsc 002248bb-ff32-0022-48bb-ff32002248bb eth0: Data path switched to VF: enP62270s1 Jun 20 18:22:43.702047 systemd-networkd[1011]: enP62270s1: Link UP Jun 20 18:22:43.702100 systemd-networkd[1011]: eth0: Link UP Jun 20 18:22:43.702238 systemd-networkd[1011]: eth0: Gained carrier Jun 20 18:22:43.702248 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 20 18:22:43.711190 systemd-networkd[1011]: enP62270s1: Gained carrier Jun 20 18:22:43.728043 systemd-networkd[1011]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:22:44.677435 ignition[976]: Ignition 2.21.0 Jun 20 18:22:44.677449 ignition[976]: Stage: fetch-offline Jun 20 18:22:44.681790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:22:44.677520 ignition[976]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:44.689012 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:22:44.677526 ignition[976]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:44.677616 ignition[976]: parsed url from cmdline: "" Jun 20 18:22:44.677619 ignition[976]: no config URL provided Jun 20 18:22:44.677622 ignition[976]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:22:44.677626 ignition[976]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:22:44.677630 ignition[976]: failed to fetch config: resource requires networking Jun 20 18:22:44.677748 ignition[976]: Ignition finished successfully Jun 20 18:22:44.720670 ignition[1022]: Ignition 2.21.0 Jun 20 18:22:44.720675 ignition[1022]: Stage: fetch Jun 20 18:22:44.720820 ignition[1022]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:44.720827 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:44.720894 ignition[1022]: parsed url from cmdline: "" Jun 20 18:22:44.720896 ignition[1022]: no config URL provided Jun 20 18:22:44.720899 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:22:44.720904 ignition[1022]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:22:44.720932 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 18:22:44.805894 ignition[1022]: GET result: OK Jun 20 18:22:44.805947 ignition[1022]: config has been read from IMDS userdata Jun 20 18:22:44.805967 ignition[1022]: parsing config with SHA512: df67b7468321fe923e2e9a5e105cfa22e4463f9e6d035ec1d3ff36ac8fe28c7daa44cf8bf0975cadd2e21b286f2900169300e53b02e0b3bc2fe038aa95a4930f Jun 20 18:22:44.811870 unknown[1022]: fetched base config from "system" Jun 20 18:22:44.812152 ignition[1022]: fetch: fetch complete Jun 20 18:22:44.811875 unknown[1022]: fetched base config from "system" Jun 20 18:22:44.812160 ignition[1022]: fetch: fetch passed Jun 20 18:22:44.811879 unknown[1022]: fetched user config from "azure" Jun 20 18:22:44.812201 ignition[1022]: Ignition finished successfully Jun 20 18:22:44.816326 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:22:44.823710 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:22:44.856945 ignition[1028]: Ignition 2.21.0 Jun 20 18:22:44.858029 ignition[1028]: Stage: kargs Jun 20 18:22:44.858211 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:44.865601 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:22:44.858219 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:44.858918 ignition[1028]: kargs: kargs passed Jun 20 18:22:44.873741 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 18:22:44.858965 ignition[1028]: Ignition finished successfully Jun 20 18:22:44.896930 ignition[1035]: Ignition 2.21.0 Jun 20 18:22:44.896945 ignition[1035]: Stage: disks Jun 20 18:22:44.900543 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:22:44.897154 ignition[1035]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:44.902415 systemd-networkd[1011]: enP62270s1: Gained IPv6LL Jun 20 18:22:44.897163 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:44.906947 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:22:44.898272 ignition[1035]: disks: disks passed Jun 20 18:22:44.914967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:22:44.898317 ignition[1035]: Ignition finished successfully Jun 20 18:22:44.923005 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:22:44.929913 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:22:44.938396 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:22:44.946846 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:22:45.026518 systemd-fsck[1044]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 18:22:45.033771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:22:45.039611 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:22:45.239021 kernel: EXT4-fs (sda9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none. Jun 20 18:22:45.239407 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:22:45.243233 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:22:45.276876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:22:45.283755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:22:45.295580 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 18:22:45.309514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:22:45.319228 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1058) Jun 20 18:22:45.309574 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:22:45.340952 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:22:45.340975 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:22:45.340990 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:22:45.329898 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:22:45.349131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:22:45.356893 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 20 18:22:45.540111 systemd-networkd[1011]: eth0: Gained IPv6LL Jun 20 18:22:46.061697 coreos-metadata[1060]: Jun 20 18:22:46.061 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:22:46.069094 coreos-metadata[1060]: Jun 20 18:22:46.069 INFO Fetch successful Jun 20 18:22:46.072788 coreos-metadata[1060]: Jun 20 18:22:46.072 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:22:46.080891 coreos-metadata[1060]: Jun 20 18:22:46.080 INFO Fetch successful Jun 20 18:22:46.097930 coreos-metadata[1060]: Jun 20 18:22:46.097 INFO wrote hostname ci-4344.1.0-a-a1e4bb5c79 to /sysroot/etc/hostname Jun 20 18:22:46.104669 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:22:46.478410 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:22:46.484400 initrd-setup-root[1095]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:22:46.489773 initrd-setup-root[1102]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:22:46.512487 initrd-setup-root[1109]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:22:47.627670 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:22:47.633333 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:22:47.653375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:22:47.663764 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:22:47.672526 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:22:47.689801 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:22:47.700249 ignition[1176]: INFO : Ignition 2.21.0 Jun 20 18:22:47.700249 ignition[1176]: INFO : Stage: mount Jun 20 18:22:47.708059 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:47.708059 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:47.708059 ignition[1176]: INFO : mount: mount passed Jun 20 18:22:47.708059 ignition[1176]: INFO : Ignition finished successfully Jun 20 18:22:47.706558 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:22:47.711882 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:22:47.735105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:22:47.766428 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1188) Jun 20 18:22:47.766484 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:22:47.770590 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:22:47.773560 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:22:47.775805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:22:47.801558 ignition[1206]: INFO : Ignition 2.21.0 Jun 20 18:22:47.801558 ignition[1206]: INFO : Stage: files Jun 20 18:22:47.810135 ignition[1206]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:47.810135 ignition[1206]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:47.810135 ignition[1206]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:22:47.810135 ignition[1206]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:22:47.810135 ignition[1206]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:22:47.851169 ignition[1206]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:22:47.856546 ignition[1206]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:22:47.856546 ignition[1206]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:22:47.851538 unknown[1206]: wrote ssh authorized keys file for user: core Jun 20 18:22:47.870662 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:22:47.870662 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jun 20 18:22:47.905063 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:22:48.091045 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jun 20 18:22:48.098984 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:22:48.098984 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jun 20 18:22:48.570877 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:22:48.635365 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 
18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:22:48.642596 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:22:48.717594 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:22:48.717594 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:22:48.717594 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jun 20 18:22:49.400382 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:22:49.600540 ignition[1206]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jun 20 18:22:49.600540 ignition[1206]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:22:49.616761 ignition[1206]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:22:49.624091 ignition[1206]: INFO : files: files passed Jun 20 18:22:49.624091 ignition[1206]: INFO : Ignition finished successfully Jun 20 18:22:49.631490 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:22:49.641437 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:22:49.663628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:22:49.676102 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:22:49.676188 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
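[Annotation] The Ignition "files" stage logged above is driven by the instance's provisioning config. As an illustrative sketch only, reconstructed from the log rather than taken from the real user data, and with the SSH key, unit body, and omitted files as placeholders, a Butane-style config producing a similar run might look like:

  variant: flatcar
  version: 1.1.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - "ssh-ed25519 AAAA... (placeholder key)"
  storage:
    files:
      # fetched over HTTPS during the files stage, as in op(3) and op(b) above
      - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
        contents:
          source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw
      # install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and update.conf omitted here
    links:
      # exposes the image as a systemd-sysext extension, as in op(a) above
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true        # matches the "setting preset to enabled" step above
        contents: |
          # unit body elided; a possible shape is sketched near the end of this log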
Jun 20 18:22:49.707576 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:22:49.707576 initrd-setup-root-after-ignition[1235]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:22:49.724677 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:22:49.710179 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:22:49.718710 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:22:49.729771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:22:49.786747 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:22:49.786842 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:22:49.795628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:22:49.804057 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:22:49.811526 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:22:49.812212 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:22:49.843506 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:22:49.849752 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:22:49.871959 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:22:49.876467 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:22:49.885138 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:22:49.892949 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:22:49.893051 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:22:49.904360 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:22:49.908345 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:22:49.916459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:22:49.924150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:22:49.931975 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:22:49.940329 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:22:49.948725 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:22:49.956318 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:22:49.965124 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:22:49.972686 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:22:49.981333 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:22:49.988233 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:22:49.988335 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:22:50.000592 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:22:50.004682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:22:50.013130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 20 18:22:50.016877 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:22:50.021555 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:22:50.021639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:22:50.033803 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:22:50.033883 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:22:50.038712 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:22:50.038779 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:22:50.045594 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:22:50.045656 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:22:50.108251 ignition[1259]: INFO : Ignition 2.21.0 Jun 20 18:22:50.108251 ignition[1259]: INFO : Stage: umount Jun 20 18:22:50.108251 ignition[1259]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:22:50.108251 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:22:50.108251 ignition[1259]: INFO : umount: umount passed Jun 20 18:22:50.108251 ignition[1259]: INFO : Ignition finished successfully Jun 20 18:22:50.056338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:22:50.068627 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:22:50.068739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:22:50.088754 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:22:50.098978 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:22:50.103969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:22:50.109467 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:22:50.109581 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:22:50.123600 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:22:50.123698 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:22:50.138458 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:22:50.140966 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:22:50.141133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:22:50.147693 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:22:50.147735 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:22:50.155366 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:22:50.155395 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:22:50.163272 systemd[1]: Stopped target network.target - Network. Jun 20 18:22:50.171081 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:22:50.171146 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:22:50.178700 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:22:50.185676 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:22:50.194027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:22:50.205875 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 20 18:22:50.213031 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:22:50.216563 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:22:50.216609 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:22:50.223594 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:22:50.223623 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:22:50.231959 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:22:50.232029 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:22:50.239503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:22:50.239534 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:22:50.248361 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:22:50.255748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:22:50.265473 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:22:50.265558 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:22:50.278359 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:22:50.278561 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:22:50.280033 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:22:50.291570 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:22:50.291772 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:22:50.291863 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:22:50.468903 kernel: hv_netvsc 002248bb-ff32-0022-48bb-ff32002248bb eth0: Data path switched from VF: enP62270s1 Jun 20 18:22:50.299894 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 18:22:50.307877 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:22:50.307921 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:22:50.326135 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:22:50.332641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:22:50.332706 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:22:50.341228 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:22:50.341486 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:22:50.356480 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:22:50.356528 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:22:50.360918 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:22:50.360960 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:22:50.372980 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:22:50.385690 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:22:50.385747 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:22:50.386026 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:22:50.386105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jun 20 18:22:50.398231 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:22:50.398357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:22:50.412619 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:22:50.412728 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:22:50.418493 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:22:50.418558 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:22:50.427131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:22:50.427158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:22:50.434657 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:22:50.434700 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:22:50.446962 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:22:50.446989 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:22:50.454190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:22:50.454217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:22:50.469521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:22:50.482674 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 18:22:50.482736 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:22:50.647841 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jun 20 18:22:50.496118 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:22:50.496161 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:22:50.509300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:22:50.509342 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:50.517967 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 18:22:50.518034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:22:50.518059 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:22:50.518314 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:22:50.518392 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:22:50.534488 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:22:50.534602 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:22:50.541990 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:22:50.551109 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:22:50.575235 systemd[1]: Switching root. 
Jun 20 18:22:50.706153 systemd-journald[225]: Journal stopped Jun 20 18:22:55.153890 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:22:55.153910 kernel: SELinux: policy capability open_perms=1 Jun 20 18:22:55.153917 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:22:55.153922 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:22:55.153929 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:22:55.153934 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:22:55.153940 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:22:55.153946 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:22:55.153951 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 18:22:55.153956 kernel: audit: type=1403 audit(1750443772.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:22:55.153963 systemd[1]: Successfully loaded SELinux policy in 113.489ms. Jun 20 18:22:55.153970 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.855ms. Jun 20 18:22:55.153977 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:22:55.153983 systemd[1]: Detected virtualization microsoft. Jun 20 18:22:55.153989 systemd[1]: Detected architecture arm64. Jun 20 18:22:55.153997 systemd[1]: Detected first boot. Jun 20 18:22:55.154021 systemd[1]: Hostname set to . Jun 20 18:22:55.154027 systemd[1]: Initializing machine ID from random generator. Jun 20 18:22:55.154032 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:22:55.154038 zram_generator::config[1302]: No configuration found. Jun 20 18:22:55.154045 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:22:55.154051 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:22:55.154058 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:22:55.154064 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:22:55.154070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:22:55.154077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:22:55.154083 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:22:55.154090 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:22:55.154096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:22:55.154103 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:22:55.154109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:22:55.154115 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:22:55.154122 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:22:55.154128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:22:55.154134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jun 20 18:22:55.154140 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:22:55.154146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:22:55.154152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:22:55.154160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:22:55.154166 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 20 18:22:55.154173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:22:55.154180 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:22:55.154186 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:22:55.154192 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:22:55.154198 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:22:55.154205 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:22:55.154211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:22:55.154217 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:22:55.154223 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:22:55.154229 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:22:55.154236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:22:55.154242 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:22:55.154249 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:22:55.154255 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:22:55.154261 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:22:55.154267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:22:55.154273 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:22:55.154280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:22:55.154287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:22:55.154293 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:22:55.154299 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:22:55.154305 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:22:55.154311 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:22:55.154318 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:22:55.154324 systemd[1]: Reached target machines.target - Containers. Jun 20 18:22:55.154330 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:22:55.154337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:22:55.154343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:22:55.154349 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jun 20 18:22:55.154356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:22:55.154363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:22:55.154369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:22:55.154375 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:22:55.154381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:22:55.154388 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:22:55.154395 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:22:55.154401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:22:55.154407 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:22:55.154413 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:22:55.154420 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:22:55.154426 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:22:55.154432 kernel: fuse: init (API version 7.41) Jun 20 18:22:55.154438 kernel: loop: module loaded Jun 20 18:22:55.154444 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:22:55.154451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:22:55.154457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:22:55.154463 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:22:55.154469 kernel: ACPI: bus type drm_connector registered Jun 20 18:22:55.154475 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:22:55.154493 systemd-journald[1403]: Collecting audit messages is disabled. Jun 20 18:22:55.154510 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:22:55.154518 systemd[1]: Stopped verity-setup.service. Jun 20 18:22:55.154525 systemd-journald[1403]: Journal started Jun 20 18:22:55.154540 systemd-journald[1403]: Runtime Journal (/run/log/journal/bebae3a5a3ef4bd687e9ffa88b351f69) is 8M, max 78.5M, 70.5M free. Jun 20 18:22:54.472304 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:22:54.479438 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:22:54.479808 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:22:54.480088 systemd[1]: systemd-journald.service: Consumed 2.252s CPU time. Jun 20 18:22:55.167835 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:22:55.168405 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:22:55.172295 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:22:55.176683 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:22:55.180428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:22:55.184630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:22:55.188968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jun 20 18:22:55.192905 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:22:55.197785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:22:55.202783 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:22:55.202910 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:22:55.208454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:22:55.208576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:22:55.213047 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:22:55.213165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:22:55.217290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:22:55.217406 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:22:55.222391 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:22:55.222519 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:22:55.226896 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:22:55.227025 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:22:55.231467 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:22:55.236142 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:22:55.241503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:22:55.246773 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:22:55.251913 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:22:55.266729 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:22:55.272518 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:22:55.281590 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:22:55.285998 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:22:55.286030 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:22:55.290487 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:22:55.298109 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:22:55.302462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:22:55.308595 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:22:55.315123 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:22:55.319685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:22:55.320333 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:22:55.324406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:22:55.325074 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 20 18:22:55.330406 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:22:55.337468 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:22:55.343186 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:22:55.352053 systemd-journald[1403]: Time spent on flushing to /var/log/journal/bebae3a5a3ef4bd687e9ffa88b351f69 is 43.782ms for 941 entries. Jun 20 18:22:55.352053 systemd-journald[1403]: System Journal (/var/log/journal/bebae3a5a3ef4bd687e9ffa88b351f69) is 11.8M, max 2.6G, 2.6G free. Jun 20 18:22:55.443513 systemd-journald[1403]: Received client request to flush runtime journal. Jun 20 18:22:55.443569 systemd-journald[1403]: /var/log/journal/bebae3a5a3ef4bd687e9ffa88b351f69/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jun 20 18:22:55.443587 kernel: loop0: detected capacity change from 0 to 28936 Jun 20 18:22:55.443615 systemd-journald[1403]: Rotating system journal. Jun 20 18:22:55.349998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:22:55.371308 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:22:55.378433 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:22:55.394195 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:22:55.444739 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:22:55.453256 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:22:55.487407 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:22:55.488694 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:22:55.636404 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:22:55.643143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:22:55.731727 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jun 20 18:22:55.732079 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Jun 20 18:22:55.737653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:22:55.742555 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:22:55.777022 kernel: loop1: detected capacity change from 0 to 207008 Jun 20 18:22:55.829036 kernel: loop2: detected capacity change from 0 to 107312 Jun 20 18:22:56.232590 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:22:56.238877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:22:56.273891 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Jun 20 18:22:56.367033 kernel: loop3: detected capacity change from 0 to 138376 Jun 20 18:22:56.484357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:22:56.493746 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:22:56.543364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:22:56.656170 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:22:56.664167 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
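[Annotation] The flush request near the start of this stretch moves the runtime journal into /var/log/journal/bebae3a5a3ef4bd687e9ffa88b351f69. Output in the microsecond-timestamp form used throughout this capture can be reproduced on the booted host with something along the lines of:

  journalctl -b -o short-precise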
Jun 20 18:22:56.720039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#194 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 18:22:56.727037 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:22:56.827208 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 18:22:56.827306 kernel: hv_vmbus: registering driver hv_balloon Jun 20 18:22:56.827321 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 18:22:56.827331 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 18:22:56.838050 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 18:22:56.838138 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 20 18:22:56.844277 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:22:56.843805 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:22:56.844794 systemd-networkd[1482]: lo: Link UP Jun 20 18:22:56.845119 systemd-networkd[1482]: lo: Gained carrier Jun 20 18:22:56.847067 systemd-networkd[1482]: Enumeration completed Jun 20 18:22:56.847514 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:22:56.847592 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:22:56.854318 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:22:56.855487 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:22:56.925748 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:22:56.933269 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:22:56.955964 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:22:56.956058 kernel: loop4: detected capacity change from 0 to 28936 Jun 20 18:22:56.956320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:56.965157 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:22:56.968279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:22:56.976030 kernel: mlx5_core f33e:00:02.0 enP62270s1: Link up Jun 20 18:22:56.976267 kernel: loop5: detected capacity change from 0 to 207008 Jun 20 18:22:56.985013 kernel: loop6: detected capacity change from 0 to 107312 Jun 20 18:22:56.998069 kernel: loop7: detected capacity change from 0 to 138376 Jun 20 18:22:56.998135 kernel: hv_netvsc 002248bb-ff32-0022-48bb-ff32002248bb eth0: Data path switched to VF: enP62270s1 Jun 20 18:22:56.999029 systemd-networkd[1482]: enP62270s1: Link UP Jun 20 18:22:56.999227 systemd-networkd[1482]: eth0: Link UP Jun 20 18:22:56.999282 systemd-networkd[1482]: eth0: Gained carrier Jun 20 18:22:56.999330 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:22:57.008638 (sd-merge)[1547]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 18:22:57.009239 (sd-merge)[1547]: Merged extensions into '/usr'. 
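[Annotation] The sd-merge step just above overlays the staged extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-azure), including the kubernetes-v1.32.4 raw image written by Ignition earlier. On the running host the merged extensions can be inspected with, for example:

  # show which hierarchies are currently overlaid and by which images
  systemd-sysext status
  # list the extension images that were found and merged
  systemd-sysext list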
Jun 20 18:22:57.010537 systemd-networkd[1482]: enP62270s1: Gained carrier Jun 20 18:22:57.017063 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:22:57.018073 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:22:57.024714 systemd[1]: Reload requested from client PID 1441 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:22:57.024728 systemd[1]: Reloading... Jun 20 18:22:57.031019 kernel: MACsec IEEE 802.1AE Jun 20 18:22:57.082970 zram_generator::config[1586]: No configuration found. Jun 20 18:22:57.176822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:22:57.278879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:22:57.284359 systemd[1]: Reloading finished in 259 ms. Jun 20 18:22:57.307324 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:22:57.339517 systemd[1]: Starting ensure-sysext.service... Jun 20 18:22:57.346232 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:22:57.355195 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:22:57.371172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:22:57.378462 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:22:57.378602 systemd[1]: Reloading... Jun 20 18:22:57.385342 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 18:22:57.385364 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 18:22:57.385570 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:22:57.385706 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:22:57.386190 systemd-tmpfiles[1697]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:22:57.386332 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Jun 20 18:22:57.386362 systemd-tmpfiles[1697]: ACLs are not supported, ignoring. Jun 20 18:22:57.405435 systemd-tmpfiles[1697]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:22:57.405450 systemd-tmpfiles[1697]: Skipping /boot Jun 20 18:22:57.415459 systemd-tmpfiles[1697]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:22:57.416072 systemd-tmpfiles[1697]: Skipping /boot Jun 20 18:22:57.462118 zram_generator::config[1727]: No configuration found. Jun 20 18:22:57.548733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:22:57.631845 systemd[1]: Reloading finished in 252 ms. Jun 20 18:22:57.642074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
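[Annotation] Earlier in this stretch eth0 was matched by /usr/lib/systemd/network/zz-default.network and obtained 10.200.20.48/24 by DHCP from the Azure wire server. A minimal sketch of such a catch-all unit (assumed contents, not the file actually shipped) is:

  [Match]
  # claim any interface not matched by a more specific .network file
  Name=*

  [Network]
  DHCP=yes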
Jun 20 18:22:57.657816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:22:57.667768 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:22:57.678707 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:22:57.684583 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:22:57.693186 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:22:57.697878 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:22:57.706181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:22:57.709258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:22:57.716180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:22:57.722323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:22:57.727320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:22:57.727418 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:22:57.732649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:22:57.732780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:22:57.732868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:22:57.735776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:22:57.741113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:22:57.743154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:22:57.748360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:22:57.748524 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:22:57.754246 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:22:57.754377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:22:57.761119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:22:57.761247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:22:57.763276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:22:57.765015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:22:57.774421 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:22:57.783656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jun 20 18:22:57.797194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:22:57.803365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:22:57.803468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:22:57.803569 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:22:57.808775 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:22:57.808993 systemd-resolved[1792]: Positive Trust Anchors: Jun 20 18:22:57.809219 systemd-resolved[1792]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:22:57.809292 systemd-resolved[1792]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:22:57.811752 systemd-resolved[1792]: Using system hostname 'ci-4344.1.0-a-a1e4bb5c79'. Jun 20 18:22:57.814017 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:22:57.819246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:22:57.819388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:22:57.824514 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:22:57.824635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:22:57.829085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:22:57.829199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:22:57.834760 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:22:57.834872 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:22:57.843110 systemd[1]: Finished ensure-sysext.service. Jun 20 18:22:57.848616 systemd[1]: Reached target network.target - Network. Jun 20 18:22:57.852297 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:22:57.856911 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:22:57.856951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:22:57.869660 augenrules[1831]: No rules Jun 20 18:22:57.870893 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:22:57.871092 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:22:58.080329 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:22:58.086440 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jun 20 18:22:58.404195 systemd-networkd[1482]: eth0: Gained IPv6LL Jun 20 18:22:58.406181 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:22:58.411300 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:22:58.788153 systemd-networkd[1482]: enP62270s1: Gained IPv6LL Jun 20 18:23:02.226956 ldconfig[1436]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:23:02.238808 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:23:02.246159 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:23:02.262583 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:23:02.266859 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:23:02.270765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:23:02.275151 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:23:02.279978 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:23:02.284020 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:23:02.288574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:23:02.293406 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:23:02.293428 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:23:02.296703 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:23:02.300813 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:23:02.306031 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:23:02.311652 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:23:02.316558 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:23:02.321039 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:23:02.335636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:23:02.340120 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:23:02.345046 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:23:02.349113 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:23:02.352550 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:23:02.356011 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:23:02.356029 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:23:02.409334 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 18:23:02.420102 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:23:02.425124 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:23:02.431117 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:23:02.437587 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jun 20 18:23:02.445064 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:23:02.449499 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:23:02.455195 (chronyd)[1844]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 18:23:02.455238 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:23:02.457582 jq[1852]: false Jun 20 18:23:02.458136 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 18:23:02.462146 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 18:23:02.462916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:02.469810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:23:02.475930 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:23:02.481472 KVP[1854]: KVP starting; pid is:1854 Jun 20 18:23:02.482552 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:23:02.489018 kernel: hv_utils: KVP IC version 4.0 Jun 20 18:23:02.489927 KVP[1854]: KVP LIC Version: 3.1 Jun 20 18:23:02.490975 chronyd[1865]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 18:23:02.491461 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:23:02.501128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:23:02.508992 chronyd[1865]: Timezone right/UTC failed leap second check, ignoring Jun 20 18:23:02.509414 chronyd[1865]: Loaded seccomp filter (level 2) Jun 20 18:23:02.509783 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:23:02.517078 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:23:02.517398 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:23:02.518328 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:23:02.524928 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:23:02.531879 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 18:23:02.535604 extend-filesystems[1853]: Found /dev/sda6 Jun 20 18:23:02.541348 jq[1877]: true Jun 20 18:23:02.539170 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:23:02.547359 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:23:02.547850 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:23:02.548988 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:23:02.549175 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:23:02.557173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:23:02.557737 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 20 18:23:02.570189 extend-filesystems[1853]: Found /dev/sda9 Jun 20 18:23:02.582261 extend-filesystems[1853]: Checking size of /dev/sda9 Jun 20 18:23:02.586262 (ntainerd)[1886]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:23:02.599021 jq[1885]: true Jun 20 18:23:02.599211 update_engine[1875]: I20250620 18:23:02.596290 1875 main.cc:92] Flatcar Update Engine starting Jun 20 18:23:02.612923 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:23:02.628916 tar[1884]: linux-arm64/LICENSE Jun 20 18:23:02.629294 tar[1884]: linux-arm64/helm Jun 20 18:23:02.634309 extend-filesystems[1853]: Old size kept for /dev/sda9 Jun 20 18:23:02.637753 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:23:02.637939 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:23:02.666440 systemd-logind[1869]: New seat seat0. Jun 20 18:23:02.669328 systemd-logind[1869]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:23:02.669666 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:23:02.707820 bash[1919]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:23:02.711049 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:23:02.720211 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 18:23:02.761879 dbus-daemon[1847]: [system] SELinux support is enabled Jun 20 18:23:02.762069 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:23:02.771441 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:23:02.771877 dbus-daemon[1847]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 18:23:02.771789 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:23:02.777979 update_engine[1875]: I20250620 18:23:02.777935 1875 update_check_scheduler.cc:74] Next update check in 3m55s Jun 20 18:23:02.779729 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:23:02.779749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:23:02.785629 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:23:02.796273 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 20 18:23:02.813075 coreos-metadata[1846]: Jun 20 18:23:02.811 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:23:02.815420 coreos-metadata[1846]: Jun 20 18:23:02.815 INFO Fetch successful Jun 20 18:23:02.815420 coreos-metadata[1846]: Jun 20 18:23:02.815 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 18:23:02.819895 coreos-metadata[1846]: Jun 20 18:23:02.819 INFO Fetch successful Jun 20 18:23:02.820532 coreos-metadata[1846]: Jun 20 18:23:02.820 INFO Fetching http://168.63.129.16/machine/39da2764-3fcd-4d4c-99c0-ef5c776fd875/60b5b442%2D8e9b%2D4b9a%2D9dda%2D8fccc8e8ba2d.%5Fci%2D4344.1.0%2Da%2Da1e4bb5c79?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 18:23:02.824028 coreos-metadata[1846]: Jun 20 18:23:02.823 INFO Fetch successful Jun 20 18:23:02.824028 coreos-metadata[1846]: Jun 20 18:23:02.823 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:23:02.835687 coreos-metadata[1846]: Jun 20 18:23:02.835 INFO Fetch successful Jun 20 18:23:02.888437 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:23:02.895581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:23:03.037703 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:23:03.078668 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:23:03.087809 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:23:03.096551 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:23:03.130318 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:23:03.131723 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:23:03.133409 locksmithd[1946]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:23:03.140248 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:23:03.156142 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:23:03.175745 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:23:03.187027 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:23:03.196444 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 20 18:23:03.205469 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:23:03.284765 tar[1884]: linux-arm64/README.md Jun 20 18:23:03.301257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:03.308098 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:03.311681 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
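The coreos-metadata fetches above talk to the Azure WireServer (168.63.129.16) and the instance metadata service (169.254.169.254). A minimal sketch, not part of the log and assuming it runs inside an Azure VM, of replaying the vmSize query with Python's standard library; the URL is copied verbatim from the log line above, and the "Metadata: true" header is the usual IMDS requirement:

import urllib.request

# URL taken verbatim from the coreos-metadata log line above.
IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout: float = 5.0) -> str:
    # Azure IMDS only answers requests carrying the Metadata: true header.
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(fetch_vm_size())  # prints the VM size reported by IMDS

Outside an Azure VM the request simply times out, which matches the "Attempt #1 ... Fetch successful" pattern the agent logs above when it is reachable.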
Jun 20 18:23:03.332595 containerd[1886]: time="2025-06-20T18:23:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 18:23:03.333670 containerd[1886]: time="2025-06-20T18:23:03.333641832Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 18:23:03.339647 containerd[1886]: time="2025-06-20T18:23:03.339615512Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.568µs" Jun 20 18:23:03.339647 containerd[1886]: time="2025-06-20T18:23:03.339642816Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 18:23:03.339725 containerd[1886]: time="2025-06-20T18:23:03.339656856Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 18:23:03.339809 containerd[1886]: time="2025-06-20T18:23:03.339793176Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 18:23:03.339809 containerd[1886]: time="2025-06-20T18:23:03.339808232Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 18:23:03.339842 containerd[1886]: time="2025-06-20T18:23:03.339825752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:03.339872 containerd[1886]: time="2025-06-20T18:23:03.339860856Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:23:03.339872 containerd[1886]: time="2025-06-20T18:23:03.339870560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340077 containerd[1886]: time="2025-06-20T18:23:03.340061664Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340077 containerd[1886]: time="2025-06-20T18:23:03.340074272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340129 containerd[1886]: time="2025-06-20T18:23:03.340081752Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340129 containerd[1886]: time="2025-06-20T18:23:03.340087448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340160 containerd[1886]: time="2025-06-20T18:23:03.340150784Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340315 containerd[1886]: time="2025-06-20T18:23:03.340298168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340331 containerd[1886]: time="2025-06-20T18:23:03.340323744Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jun 20 18:23:03.340346 containerd[1886]: time="2025-06-20T18:23:03.340331112Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 18:23:03.340362 containerd[1886]: time="2025-06-20T18:23:03.340355576Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 18:23:03.340837 containerd[1886]: time="2025-06-20T18:23:03.340495176Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 18:23:03.340837 containerd[1886]: time="2025-06-20T18:23:03.340585712Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:23:03.353282 containerd[1886]: time="2025-06-20T18:23:03.353251448Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353706984Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353728240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353747696Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353758624Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353765296Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353774760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353782872Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353790112Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353796632Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353802208Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353810328Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353917848Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353932160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 18:23:03.354032 containerd[1886]: time="2025-06-20T18:23:03.353941432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353948472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353954448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353961000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353970576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353977032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353983944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 18:23:03.354247 containerd[1886]: time="2025-06-20T18:23:03.353990688Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 18:23:03.354472 containerd[1886]: time="2025-06-20T18:23:03.353997608Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 18:23:03.354891 containerd[1886]: time="2025-06-20T18:23:03.354869224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 18:23:03.354906 containerd[1886]: time="2025-06-20T18:23:03.354899896Z" level=info msg="Start snapshots syncer" Jun 20 18:23:03.354935 containerd[1886]: time="2025-06-20T18:23:03.354924616Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 18:23:03.355137 containerd[1886]: time="2025-06-20T18:23:03.355109944Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 18:23:03.355219 containerd[1886]: time="2025-06-20T18:23:03.355146840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 18:23:03.355219 containerd[1886]: time="2025-06-20T18:23:03.355199456Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 18:23:03.355309 containerd[1886]: time="2025-06-20T18:23:03.355291256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 18:23:03.355331 containerd[1886]: time="2025-06-20T18:23:03.355311360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 18:23:03.355331 containerd[1886]: time="2025-06-20T18:23:03.355318992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 18:23:03.355331 containerd[1886]: time="2025-06-20T18:23:03.355325192Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 18:23:03.355373 containerd[1886]: time="2025-06-20T18:23:03.355333512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 18:23:03.355373 containerd[1886]: time="2025-06-20T18:23:03.355340432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 18:23:03.355373 containerd[1886]: time="2025-06-20T18:23:03.355347000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 18:23:03.355373 containerd[1886]: time="2025-06-20T18:23:03.355364880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 18:23:03.355373 containerd[1886]: 
time="2025-06-20T18:23:03.355372104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355378496Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355403280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355412704Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355418128Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355423704Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355428272Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 18:23:03.355443 containerd[1886]: time="2025-06-20T18:23:03.355433880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355441280Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355457936Z" level=info msg="runtime interface created" Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355461048Z" level=info msg="created NRI interface" Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355468968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355476256Z" level=info msg="Connect containerd service" Jun 20 18:23:03.355597 containerd[1886]: time="2025-06-20T18:23:03.355493088Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:23:03.360633 containerd[1886]: time="2025-06-20T18:23:03.360598576Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:23:03.557510 kubelet[2035]: E0620 18:23:03.557383 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:03.559407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:03.559518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:03.560046 systemd[1]: kubelet.service: Consumed 537ms CPU time, 256M memory peak. 
Jun 20 18:23:04.196027 containerd[1886]: time="2025-06-20T18:23:04.195942376Z" level=info msg="Start subscribing containerd event" Jun 20 18:23:04.196027 containerd[1886]: time="2025-06-20T18:23:04.196027360Z" level=info msg="Start recovering state" Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196106232Z" level=info msg="Start event monitor" Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196116384Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196121952Z" level=info msg="Start streaming server" Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196129304Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196134168Z" level=info msg="runtime interface starting up..." Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196138224Z" level=info msg="starting plugins..." Jun 20 18:23:04.196181 containerd[1886]: time="2025-06-20T18:23:04.196148712Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 18:23:04.196486 containerd[1886]: time="2025-06-20T18:23:04.196464512Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:23:04.196509 containerd[1886]: time="2025-06-20T18:23:04.196501112Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:23:04.200178 containerd[1886]: time="2025-06-20T18:23:04.196543176Z" level=info msg="containerd successfully booted in 0.864291s" Jun 20 18:23:04.196681 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:23:04.201864 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:23:04.207712 systemd[1]: Startup finished in 1.629s (kernel) + 12.982s (initrd) + 12.086s (userspace) = 26.698s. Jun 20 18:23:04.522790 login[2027]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jun 20 18:23:04.523150 login[2025]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:04.528222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:23:04.529111 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:23:04.534419 systemd-logind[1869]: New session 2 of user core. Jun 20 18:23:04.569018 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:23:04.570991 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:23:04.582478 (systemd)[2063]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:23:04.584352 systemd-logind[1869]: New session c1 of user core. Jun 20 18:23:04.682959 systemd[2063]: Queued start job for default target default.target. Jun 20 18:23:04.700153 systemd[2063]: Created slice app.slice - User Application Slice. Jun 20 18:23:04.700175 systemd[2063]: Reached target paths.target - Paths. Jun 20 18:23:04.700204 systemd[2063]: Reached target timers.target - Timers. Jun 20 18:23:04.702112 systemd[2063]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:23:04.712591 systemd[2063]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:23:04.712755 systemd[2063]: Reached target sockets.target - Sockets. Jun 20 18:23:04.712858 systemd[2063]: Reached target basic.target - Basic System. 
Jun 20 18:23:04.713045 systemd[2063]: Reached target default.target - Main User Target. Jun 20 18:23:04.713136 systemd[2063]: Startup finished in 124ms. Jun 20 18:23:04.713261 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:23:04.720379 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:23:04.931936 waagent[2019]: 2025-06-20T18:23:04.931799Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 18:23:04.935937 waagent[2019]: 2025-06-20T18:23:04.935897Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 18:23:04.939225 waagent[2019]: 2025-06-20T18:23:04.939190Z INFO Daemon Daemon Python: 3.11.12 Jun 20 18:23:04.942383 waagent[2019]: 2025-06-20T18:23:04.942321Z INFO Daemon Daemon Run daemon Jun 20 18:23:04.945147 waagent[2019]: 2025-06-20T18:23:04.945116Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 18:23:04.951760 waagent[2019]: 2025-06-20T18:23:04.951578Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:23:04.955269 waagent[2019]: 2025-06-20T18:23:04.955238Z INFO Daemon Daemon Activate resource disk Jun 20 18:23:04.958430 waagent[2019]: 2025-06-20T18:23:04.958405Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:23:04.966331 waagent[2019]: 2025-06-20T18:23:04.966299Z INFO Daemon Daemon Found device: None Jun 20 18:23:04.969274 waagent[2019]: 2025-06-20T18:23:04.969248Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:23:04.974713 waagent[2019]: 2025-06-20T18:23:04.974692Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:23:04.982615 waagent[2019]: 2025-06-20T18:23:04.982578Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:23:04.986648 waagent[2019]: 2025-06-20T18:23:04.986620Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:23:04.995085 waagent[2019]: 2025-06-20T18:23:04.995046Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:23:05.004686 waagent[2019]: 2025-06-20T18:23:05.004650Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:23:05.011464 waagent[2019]: 2025-06-20T18:23:05.011436Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:23:05.014989 waagent[2019]: 2025-06-20T18:23:05.014969Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:23:05.099029 waagent[2019]: 2025-06-20T18:23:05.098906Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:23:05.109636 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:23:05.111046 waagent[2019]: 2025-06-20T18:23:05.110702Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:23:05.114043 waagent[2019]: 2025-06-20T18:23:05.113998Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:23:05.117829 waagent[2019]: 2025-06-20T18:23:05.117800Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 18:23:05.122179 waagent[2019]: 2025-06-20T18:23:05.122158Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:23:05.125792 waagent[2019]: 2025-06-20T18:23:05.125763Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:23:05.129266 waagent[2019]: 2025-06-20T18:23:05.129243Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:23:05.180383 waagent[2019]: 2025-06-20T18:23:05.180347Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:23:05.185171 waagent[2019]: 2025-06-20T18:23:05.185120Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:23:05.188877 waagent[2019]: 2025-06-20T18:23:05.188853Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:23:05.325031 waagent[2019]: 2025-06-20T18:23:05.321129Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:23:05.325889 waagent[2019]: 2025-06-20T18:23:05.325850Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:23:05.333303 waagent[2019]: 2025-06-20T18:23:05.333267Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:23:05.390816 waagent[2019]: 2025-06-20T18:23:05.390782Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:23:05.394838 waagent[2019]: 2025-06-20T18:23:05.394806Z INFO Daemon Jun 20 18:23:05.396802 waagent[2019]: 2025-06-20T18:23:05.396777Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a078fdf8-1316-4f51-a543-30e2d90d444b eTag: 12054892746425823639 source: Fabric] Jun 20 18:23:05.405114 waagent[2019]: 2025-06-20T18:23:05.405086Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 18:23:05.409677 waagent[2019]: 2025-06-20T18:23:05.409650Z INFO Daemon Jun 20 18:23:05.411616 waagent[2019]: 2025-06-20T18:23:05.411594Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:23:05.420129 waagent[2019]: 2025-06-20T18:23:05.420103Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:23:05.524458 login[2027]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:05.530399 systemd-logind[1869]: New session 1 of user core. Jun 20 18:23:05.535128 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:23:05.547770 waagent[2019]: 2025-06-20T18:23:05.547718Z INFO Daemon Downloaded certificate {'thumbprint': 'B1007039B56CFC692A85BB205207DA69FF682917', 'hasPrivateKey': False} Jun 20 18:23:05.554623 waagent[2019]: 2025-06-20T18:23:05.554591Z INFO Daemon Downloaded certificate {'thumbprint': '4CF427BA933D07DA7A91F333EAB25D9B354E0529', 'hasPrivateKey': True} Jun 20 18:23:05.561226 waagent[2019]: 2025-06-20T18:23:05.561192Z INFO Daemon Fetch goal state completed Jun 20 18:23:05.603800 waagent[2019]: 2025-06-20T18:23:05.603745Z INFO Daemon Daemon Starting provisioning Jun 20 18:23:05.607924 waagent[2019]: 2025-06-20T18:23:05.607885Z INFO Daemon Daemon Handle ovf-env.xml. 
Jun 20 18:23:05.611678 waagent[2019]: 2025-06-20T18:23:05.611634Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-a1e4bb5c79] Jun 20 18:23:05.632632 waagent[2019]: 2025-06-20T18:23:05.632590Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-a1e4bb5c79] Jun 20 18:23:05.637079 waagent[2019]: 2025-06-20T18:23:05.637047Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:23:05.641521 waagent[2019]: 2025-06-20T18:23:05.641492Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:23:05.650340 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:23:05.650348 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:23:05.650395 systemd-networkd[1482]: eth0: DHCP lease lost Jun 20 18:23:05.651267 waagent[2019]: 2025-06-20T18:23:05.651221Z INFO Daemon Daemon Create user account if not exists Jun 20 18:23:05.654964 waagent[2019]: 2025-06-20T18:23:05.654934Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:23:05.658892 waagent[2019]: 2025-06-20T18:23:05.658854Z INFO Daemon Daemon Configure sudoer Jun 20 18:23:05.665531 waagent[2019]: 2025-06-20T18:23:05.665490Z INFO Daemon Daemon Configure sshd Jun 20 18:23:05.671695 waagent[2019]: 2025-06-20T18:23:05.671655Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:23:05.680193 waagent[2019]: 2025-06-20T18:23:05.680164Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:23:05.681039 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.48/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:23:06.800270 waagent[2019]: 2025-06-20T18:23:06.800210Z INFO Daemon Daemon Provisioning complete Jun 20 18:23:06.813583 waagent[2019]: 2025-06-20T18:23:06.813551Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:23:06.818025 waagent[2019]: 2025-06-20T18:23:06.817989Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 20 18:23:06.824711 waagent[2019]: 2025-06-20T18:23:06.824688Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 18:23:06.919293 waagent[2117]: 2025-06-20T18:23:06.919231Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 18:23:06.919570 waagent[2117]: 2025-06-20T18:23:06.919348Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 18:23:06.919570 waagent[2117]: 2025-06-20T18:23:06.919384Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 18:23:06.919570 waagent[2117]: 2025-06-20T18:23:06.919417Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jun 20 18:23:06.962039 waagent[2117]: 2025-06-20T18:23:06.961458Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 18:23:06.962039 waagent[2117]: 2025-06-20T18:23:06.961650Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:23:06.962039 waagent[2117]: 2025-06-20T18:23:06.961695Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:23:06.967370 waagent[2117]: 2025-06-20T18:23:06.967322Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:23:06.972295 waagent[2117]: 2025-06-20T18:23:06.972266Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:23:06.972649 waagent[2117]: 2025-06-20T18:23:06.972619Z INFO ExtHandler Jun 20 18:23:06.972699 waagent[2117]: 2025-06-20T18:23:06.972682Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e83a2895-ae2b-433e-b68a-4c0a494ff4f0 eTag: 12054892746425823639 source: Fabric] Jun 20 18:23:06.972913 waagent[2117]: 2025-06-20T18:23:06.972887Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:23:06.973326 waagent[2117]: 2025-06-20T18:23:06.973296Z INFO ExtHandler Jun 20 18:23:06.973365 waagent[2117]: 2025-06-20T18:23:06.973349Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:23:06.976649 waagent[2117]: 2025-06-20T18:23:06.976621Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:23:07.031586 waagent[2117]: 2025-06-20T18:23:07.031525Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B1007039B56CFC692A85BB205207DA69FF682917', 'hasPrivateKey': False} Jun 20 18:23:07.031864 waagent[2117]: 2025-06-20T18:23:07.031833Z INFO ExtHandler Downloaded certificate {'thumbprint': '4CF427BA933D07DA7A91F333EAB25D9B354E0529', 'hasPrivateKey': True} Jun 20 18:23:07.032187 waagent[2117]: 2025-06-20T18:23:07.032157Z INFO ExtHandler Fetch goal state completed Jun 20 18:23:07.043742 waagent[2117]: 2025-06-20T18:23:07.043700Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 18:23:07.046946 waagent[2117]: 2025-06-20T18:23:07.046904Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2117 Jun 20 18:23:07.047067 waagent[2117]: 2025-06-20T18:23:07.047041Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:23:07.047315 waagent[2117]: 2025-06-20T18:23:07.047288Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 18:23:07.048374 waagent[2117]: 2025-06-20T18:23:07.048340Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:23:07.048685 waagent[2117]: 2025-06-20T18:23:07.048656Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 18:23:07.048789 waagent[2117]: 2025-06-20T18:23:07.048768Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 18:23:07.049233 waagent[2117]: 2025-06-20T18:23:07.049206Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:23:07.066492 waagent[2117]: 2025-06-20T18:23:07.066427Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:23:07.066588 waagent[2117]: 2025-06-20T18:23:07.066561Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:23:07.071032 waagent[2117]: 2025-06-20T18:23:07.070994Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:23:07.075422 systemd[1]: Reload requested from client PID 2134 ('systemctl') (unit waagent.service)... Jun 20 18:23:07.075634 systemd[1]: Reloading... Jun 20 18:23:07.146083 zram_generator::config[2196]: No configuration found. Jun 20 18:23:07.189187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:07.271474 systemd[1]: Reloading finished in 195 ms. 
Jun 20 18:23:07.282591 waagent[2117]: 2025-06-20T18:23:07.281923Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:23:07.282591 waagent[2117]: 2025-06-20T18:23:07.282084Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:23:07.620035 waagent[2117]: 2025-06-20T18:23:07.619257Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 18:23:07.620035 waagent[2117]: 2025-06-20T18:23:07.619570Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 18:23:07.620236 waagent[2117]: 2025-06-20T18:23:07.620191Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:23:07.620374 waagent[2117]: 2025-06-20T18:23:07.620333Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:23:07.620581 waagent[2117]: 2025-06-20T18:23:07.620553Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:23:07.620755 waagent[2117]: 2025-06-20T18:23:07.620722Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 18:23:07.621131 waagent[2117]: 2025-06-20T18:23:07.621086Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:23:07.621216 waagent[2117]: 2025-06-20T18:23:07.621183Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:23:07.621216 waagent[2117]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:23:07.621216 waagent[2117]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:23:07.621216 waagent[2117]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:23:07.621216 waagent[2117]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:23:07.621216 waagent[2117]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:23:07.621216 waagent[2117]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:23:07.621424 waagent[2117]: 2025-06-20T18:23:07.621404Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:23:07.621478 waagent[2117]: 2025-06-20T18:23:07.621456Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:23:07.621632 waagent[2117]: 2025-06-20T18:23:07.621597Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:23:07.621672 waagent[2117]: 2025-06-20T18:23:07.621654Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:23:07.621744 waagent[2117]: 2025-06-20T18:23:07.621719Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:23:07.622031 waagent[2117]: 2025-06-20T18:23:07.621973Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:23:07.622178 waagent[2117]: 2025-06-20T18:23:07.622133Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:23:07.622735 waagent[2117]: 2025-06-20T18:23:07.622678Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:23:07.622856 waagent[2117]: 2025-06-20T18:23:07.622750Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:23:07.622984 waagent[2117]: 2025-06-20T18:23:07.622906Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jun 20 18:23:07.632209 waagent[2117]: 2025-06-20T18:23:07.632165Z INFO ExtHandler ExtHandler Jun 20 18:23:07.632353 waagent[2117]: 2025-06-20T18:23:07.632325Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 48fa8c3e-c8e6-4cdf-8169-630b89e08000 correlation f2b40c3b-b0f9-476d-8e4d-260e0e3a29a1 created: 2025-06-20T18:21:53.318565Z] Jun 20 18:23:07.632733 waagent[2117]: 2025-06-20T18:23:07.632694Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 18:23:07.633246 waagent[2117]: 2025-06-20T18:23:07.633212Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jun 20 18:23:07.660044 waagent[2117]: 2025-06-20T18:23:07.659964Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 18:23:07.660044 waagent[2117]: Try `iptables -h' or 'iptables --help' for more information.) Jun 20 18:23:07.660347 waagent[2117]: 2025-06-20T18:23:07.660311Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 13E01BF2-E160-48B2-8BC4-EAF73B70FEEA;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 18:23:07.679161 waagent[2117]: 2025-06-20T18:23:07.679103Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:23:07.679161 waagent[2117]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:23:07.679161 waagent[2117]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:23:07.679161 waagent[2117]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:ff:32 brd ff:ff:ff:ff:ff:ff Jun 20 18:23:07.679161 waagent[2117]: 3: enP62270s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:ff:32 brd ff:ff:ff:ff:ff:ff\ altname enP62270p0s2 Jun 20 18:23:07.679161 waagent[2117]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:23:07.679161 waagent[2117]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:23:07.679161 waagent[2117]: 2: eth0 inet 10.200.20.48/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:23:07.679161 waagent[2117]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 18:23:07.679161 waagent[2117]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:23:07.679161 waagent[2117]: 2: eth0 inet6 fe80::222:48ff:febb:ff32/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:23:07.679161 waagent[2117]: 3: enP62270s1 inet6 fe80::222:48ff:febb:ff32/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:23:08.469399 waagent[2117]: 2025-06-20T18:23:08.469334Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 18:23:08.469399 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.469399 waagent[2117]: pkts bytes target prot opt in out source destination Jun 20 18:23:08.469399 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.469399 waagent[2117]: pkts bytes target prot opt in out 
source destination Jun 20 18:23:08.469399 waagent[2117]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.469399 waagent[2117]: pkts bytes target prot opt in out source destination Jun 20 18:23:08.469399 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:23:08.469399 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:23:08.469399 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:23:08.471710 waagent[2117]: 2025-06-20T18:23:08.471666Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:23:08.471710 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.471710 waagent[2117]: pkts bytes target prot opt in out source destination Jun 20 18:23:08.471710 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.471710 waagent[2117]: pkts bytes target prot opt in out source destination Jun 20 18:23:08.471710 waagent[2117]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:23:08.471710 waagent[2117]: pkts bytes target prot opt in out source destination Jun 20 18:23:08.471710 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:23:08.471710 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:23:08.471710 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:23:08.471892 waagent[2117]: 2025-06-20T18:23:08.471869Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:23:10.020907 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:23:10.021889 systemd[1]: Started sshd@0-10.200.20.48:22-10.200.16.10:36082.service - OpenSSH per-connection server daemon (10.200.16.10:36082). Jun 20 18:23:10.680372 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 36082 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:10.682919 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:10.686539 systemd-logind[1869]: New session 3 of user core. Jun 20 18:23:10.701111 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:23:11.094200 systemd[1]: Started sshd@1-10.200.20.48:22-10.200.16.10:36098.service - OpenSSH per-connection server daemon (10.200.16.10:36098). Jun 20 18:23:11.564167 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 36098 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:11.565106 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:11.568863 systemd-logind[1869]: New session 4 of user core. Jun 20 18:23:11.576210 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:23:11.915125 sshd[2268]: Connection closed by 10.200.16.10 port 36098 Jun 20 18:23:11.915739 sshd-session[2266]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:11.918883 systemd[1]: sshd@1-10.200.20.48:22-10.200.16.10:36098.service: Deactivated successfully. Jun 20 18:23:11.920646 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:23:11.921492 systemd-logind[1869]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:23:11.922872 systemd-logind[1869]: Removed session 4. Jun 20 18:23:12.009628 systemd[1]: Started sshd@2-10.200.20.48:22-10.200.16.10:36110.service - OpenSSH per-connection server daemon (10.200.16.10:36110). 
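A minimal sketch, not taken from waagent itself, of expressing the three OUTPUT rules listed above as explicit iptables invocations. The choice of the security table is an assumption based on the `iptables -w -t security -L OUTPUT` probe logged earlier, and the commands require root:

import subprocess

WIRESERVER = "168.63.129.16"  # Azure WireServer address from the rules above

# One entry per rule printed in the log: allow TCP to port 53, allow TCP owned
# by UID 0, drop new/invalid TCP connections from everyone else.
RULES = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        # -w waits for the xtables lock; the table is an assumption (see above).
        subprocess.run(["iptables", "-w", "-t", "security", *rule], check=True)

if __name__ == "__main__":
    apply_rules()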
Jun 20 18:23:12.500937 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 36110 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:12.501979 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:12.505643 systemd-logind[1869]: New session 5 of user core. Jun 20 18:23:12.512288 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:23:12.845404 sshd[2276]: Connection closed by 10.200.16.10 port 36110 Jun 20 18:23:12.846038 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:12.849163 systemd[1]: sshd@2-10.200.20.48:22-10.200.16.10:36110.service: Deactivated successfully. Jun 20 18:23:12.850525 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:23:12.851074 systemd-logind[1869]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:23:12.852212 systemd-logind[1869]: Removed session 5. Jun 20 18:23:12.936487 systemd[1]: Started sshd@3-10.200.20.48:22-10.200.16.10:36114.service - OpenSSH per-connection server daemon (10.200.16.10:36114). Jun 20 18:23:13.427259 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 36114 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:13.428284 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:13.431955 systemd-logind[1869]: New session 6 of user core. Jun 20 18:23:13.439400 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:23:13.699540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:23:13.701672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:13.776339 sshd[2284]: Connection closed by 10.200.16.10 port 36114 Jun 20 18:23:13.777194 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:13.780143 systemd-logind[1869]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:23:13.781975 systemd[1]: sshd@3-10.200.20.48:22-10.200.16.10:36114.service: Deactivated successfully. Jun 20 18:23:13.783898 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:23:13.787450 systemd-logind[1869]: Removed session 6. Jun 20 18:23:13.801331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:13.809247 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:13.868481 systemd[1]: Started sshd@4-10.200.20.48:22-10.200.16.10:36120.service - OpenSSH per-connection server daemon (10.200.16.10:36120). Jun 20 18:23:13.903911 kubelet[2297]: E0620 18:23:13.903862 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:13.906649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:13.906869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:13.907384 systemd[1]: kubelet.service: Consumed 108ms CPU time, 107.3M memory peak. 
Jun 20 18:23:14.320120 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 36120 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:14.321256 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:14.324990 systemd-logind[1869]: New session 7 of user core. Jun 20 18:23:14.331106 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:23:14.672875 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:23:14.673121 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:14.703816 sudo[2308]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:14.783846 sshd[2307]: Connection closed by 10.200.16.10 port 36120 Jun 20 18:23:14.784555 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:14.787669 systemd-logind[1869]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:23:14.788578 systemd[1]: sshd@4-10.200.20.48:22-10.200.16.10:36120.service: Deactivated successfully. Jun 20 18:23:14.790019 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:23:14.791447 systemd-logind[1869]: Removed session 7. Jun 20 18:23:14.869651 systemd[1]: Started sshd@5-10.200.20.48:22-10.200.16.10:36130.service - OpenSSH per-connection server daemon (10.200.16.10:36130). Jun 20 18:23:15.322901 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 36130 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:15.323998 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:15.327276 systemd-logind[1869]: New session 8 of user core. Jun 20 18:23:15.334279 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:23:15.578277 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:23:15.578477 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:15.585686 sudo[2318]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:15.589080 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:23:15.589267 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:15.596585 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:23:15.625400 augenrules[2340]: No rules Jun 20 18:23:15.626639 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:23:15.626802 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:23:15.628729 sudo[2317]: pam_unix(sudo:session): session closed for user root Jun 20 18:23:15.708020 sshd[2316]: Connection closed by 10.200.16.10 port 36130 Jun 20 18:23:15.708554 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Jun 20 18:23:15.711767 systemd[1]: sshd@5-10.200.20.48:22-10.200.16.10:36130.service: Deactivated successfully. Jun 20 18:23:15.713239 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:23:15.713805 systemd-logind[1869]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:23:15.715155 systemd-logind[1869]: Removed session 8. Jun 20 18:23:15.803313 systemd[1]: Started sshd@6-10.200.20.48:22-10.200.16.10:36134.service - OpenSSH per-connection server daemon (10.200.16.10:36134). 
Jun 20 18:23:16.271509 sshd[2349]: Accepted publickey for core from 10.200.16.10 port 36134 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:23:16.272593 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:23:16.276155 systemd-logind[1869]: New session 9 of user core. Jun 20 18:23:16.283208 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 18:23:16.535821 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:23:16.536408 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:23:18.005163 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:23:18.016246 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:23:19.170353 dockerd[2369]: time="2025-06-20T18:23:19.170301560Z" level=info msg="Starting up" Jun 20 18:23:19.171547 dockerd[2369]: time="2025-06-20T18:23:19.171524912Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 18:23:19.301978 systemd[1]: var-lib-docker-metacopy\x2dcheck202174479-merged.mount: Deactivated successfully. Jun 20 18:23:19.316011 dockerd[2369]: time="2025-06-20T18:23:19.315936472Z" level=info msg="Loading containers: start." Jun 20 18:23:19.326031 kernel: Initializing XFRM netlink socket Jun 20 18:23:19.630170 systemd-networkd[1482]: docker0: Link UP Jun 20 18:23:19.643196 dockerd[2369]: time="2025-06-20T18:23:19.643116136Z" level=info msg="Loading containers: done." Jun 20 18:23:19.651839 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3281642066-merged.mount: Deactivated successfully. Jun 20 18:23:19.673980 dockerd[2369]: time="2025-06-20T18:23:19.673640672Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:23:19.673980 dockerd[2369]: time="2025-06-20T18:23:19.673725184Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 18:23:19.673980 dockerd[2369]: time="2025-06-20T18:23:19.673828912Z" level=info msg="Initializing buildkit" Jun 20 18:23:19.714897 dockerd[2369]: time="2025-06-20T18:23:19.714826096Z" level=info msg="Completed buildkit initialization" Jun 20 18:23:19.720409 dockerd[2369]: time="2025-06-20T18:23:19.720373904Z" level=info msg="Daemon has completed initialization" Jun 20 18:23:19.720506 dockerd[2369]: time="2025-06-20T18:23:19.720421584Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:23:19.720804 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:23:20.493740 containerd[1886]: time="2025-06-20T18:23:20.493693248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 18:23:21.444741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590869794.mount: Deactivated successfully. 
Jun 20 18:23:22.457100 containerd[1886]: time="2025-06-20T18:23:22.457047256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:22.460067 containerd[1886]: time="2025-06-20T18:23:22.460030360Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328194" Jun 20 18:23:22.463862 containerd[1886]: time="2025-06-20T18:23:22.463819648Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:22.467388 containerd[1886]: time="2025-06-20T18:23:22.467335936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:22.467972 containerd[1886]: time="2025-06-20T18:23:22.467782432Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.974045008s" Jun 20 18:23:22.467972 containerd[1886]: time="2025-06-20T18:23:22.467811760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jun 20 18:23:22.468606 containerd[1886]: time="2025-06-20T18:23:22.468577920Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 18:23:23.674915 containerd[1886]: time="2025-06-20T18:23:23.674864856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:23.681339 containerd[1886]: time="2025-06-20T18:23:23.681301608Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529228" Jun 20 18:23:23.689519 containerd[1886]: time="2025-06-20T18:23:23.689470568Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:23.695307 containerd[1886]: time="2025-06-20T18:23:23.695252528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:23.696056 containerd[1886]: time="2025-06-20T18:23:23.695720232Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.22711696s" Jun 20 18:23:23.696056 containerd[1886]: time="2025-06-20T18:23:23.695746424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jun 20 18:23:23.696243 
containerd[1886]: time="2025-06-20T18:23:23.696195672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 18:23:24.157133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:23:24.158466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:24.251718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:24.254330 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:24.337991 kubelet[2634]: E0620 18:23:24.337938 2634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:24.340220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:24.340436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:24.340913 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104M memory peak. Jun 20 18:23:25.358511 containerd[1886]: time="2025-06-20T18:23:25.358460712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:25.360678 containerd[1886]: time="2025-06-20T18:23:25.360644192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484141" Jun 20 18:23:25.365281 containerd[1886]: time="2025-06-20T18:23:25.365240000Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:25.369582 containerd[1886]: time="2025-06-20T18:23:25.369531736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:25.370163 containerd[1886]: time="2025-06-20T18:23:25.370042120Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.673822552s" Jun 20 18:23:25.370163 containerd[1886]: time="2025-06-20T18:23:25.370066384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jun 20 18:23:25.370637 containerd[1886]: time="2025-06-20T18:23:25.370559464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 18:23:26.299407 chronyd[1865]: Selected source PHC0 Jun 20 18:23:27.649110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013322999.mount: Deactivated successfully. 
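The two pulls above log both the unpacked image size and the elapsed time, so the effective throughput can be read straight off the figures; a small worked example using the values from the log:

# Effective pull throughput implied by the sizes and durations logged above.
pulls = {
    "kube-apiserver:v1.32.6": (26324994, 1.974045008),            # bytes, seconds (from the log)
    "kube-controller-manager:v1.32.6": (24065018, 1.22711696),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")

That works out to roughly 12.7 MiB/s and 18.7 MiB/s; the coredns pull further below (about 17 MB in 18.3 s) lands an order of magnitude lower.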
Jun 20 18:23:28.393166 containerd[1886]: time="2025-06-20T18:23:28.393104513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:28.398322 containerd[1886]: time="2025-06-20T18:23:28.398142737Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378406" Jun 20 18:23:28.440576 containerd[1886]: time="2025-06-20T18:23:28.440547017Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:28.444089 containerd[1886]: time="2025-06-20T18:23:28.444038185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:28.444423 containerd[1886]: time="2025-06-20T18:23:28.444305089Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 3.073724321s" Jun 20 18:23:28.444423 containerd[1886]: time="2025-06-20T18:23:28.444334305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jun 20 18:23:28.445050 containerd[1886]: time="2025-06-20T18:23:28.445033681Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 18:23:30.954519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161565885.mount: Deactivated successfully. Jun 20 18:23:34.463362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 18:23:34.464675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:34.561752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:34.564090 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:34.594016 kubelet[2664]: E0620 18:23:34.593923 2664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:34.596138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:34.596334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:34.596853 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.9M memory peak. Jun 20 18:23:44.713507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 18:23:44.715225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:44.805853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:23:44.808129 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:23:44.901843 kubelet[2687]: E0620 18:23:44.901776 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:23:44.903979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:23:44.904220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:23:44.904760 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.8M memory peak. Jun 20 18:23:44.979609 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 20 18:23:46.779036 containerd[1886]: time="2025-06-20T18:23:46.778393784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:46.780661 containerd[1886]: time="2025-06-20T18:23:46.780639853Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jun 20 18:23:46.784025 containerd[1886]: time="2025-06-20T18:23:46.783990705Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:46.787447 containerd[1886]: time="2025-06-20T18:23:46.787418567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:46.788139 containerd[1886]: time="2025-06-20T18:23:46.788114642Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 18.343002368s" Jun 20 18:23:46.788139 containerd[1886]: time="2025-06-20T18:23:46.788140698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jun 20 18:23:46.788565 containerd[1886]: time="2025-06-20T18:23:46.788517541Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:23:47.572993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097462954.mount: Deactivated successfully. 
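The three kubelet failures above all trace back to /var/lib/kubelet/config.yaml not existing yet (that file is typically written by kubeadm, which has not run at this point in the log), and systemd keeps retrying the unit on a fixed schedule: restart counters 2, 3 and 4 land at 18:23:24, 18:23:34 and 18:23:44, roughly ten seconds apart. A minimal sketch that pulls those attempts out of a plain journalctl dump (one entry per line); the dump path is a placeholder:

import re
from datetime import datetime

# Matches entries such as:
#   Jun 20 18:23:24.157133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
pattern = re.compile(
    r"(\w{3} \d{2} \d{2}:\d{2}:\d{2})\.\d+ .*?kubelet\.service: "
    r"Scheduled restart job, restart counter is at (\d+)"
)

def restart_attempts(journal_text: str):
    attempts = []
    for stamp, counter in pattern.findall(journal_text):
        # journalctl's short format omits the year; 2025 comes from the surrounding entries.
        attempts.append((int(counter), datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S")))
    return attempts

journal_text = open("journal.txt").read()   # placeholder path for an exported journal
attempts = restart_attempts(journal_text)
for (n_prev, t_prev), (n, t) in zip(attempts, attempts[1:]):
    print(f"restart {n}: {(t - t_prev).total_seconds():.0f}s after restart {n_prev}")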
Jun 20 18:23:47.596031 containerd[1886]: time="2025-06-20T18:23:47.595863161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:47.598657 containerd[1886]: time="2025-06-20T18:23:47.598634989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jun 20 18:23:47.602454 containerd[1886]: time="2025-06-20T18:23:47.602419029Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:47.607190 containerd[1886]: time="2025-06-20T18:23:47.607138710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:23:47.609071 containerd[1886]: time="2025-06-20T18:23:47.608792955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 820.252726ms" Jun 20 18:23:47.609071 containerd[1886]: time="2025-06-20T18:23:47.608818812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jun 20 18:23:47.610426 containerd[1886]: time="2025-06-20T18:23:47.610401535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 18:23:48.246214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239821913.mount: Deactivated successfully. Jun 20 18:23:48.516567 update_engine[1875]: I20250620 18:23:48.516330 1875 update_attempter.cc:509] Updating boot flags... 
Jun 20 18:23:49.760801 containerd[1886]: time="2025-06-20T18:23:49.760114569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:49.762250 containerd[1886]: time="2025-06-20T18:23:49.762225355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jun 20 18:23:49.769552 containerd[1886]: time="2025-06-20T18:23:49.769522779Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:49.773882 containerd[1886]: time="2025-06-20T18:23:49.773850009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:23:49.774655 containerd[1886]: time="2025-06-20T18:23:49.774630695Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.164199983s" Jun 20 18:23:49.774655 containerd[1886]: time="2025-06-20T18:23:49.774654959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jun 20 18:23:52.490446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:52.490982 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.8M memory peak. Jun 20 18:23:52.494168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:52.512421 systemd[1]: Reload requested from client PID 2883 ('systemctl') (unit session-9.scope)... Jun 20 18:23:52.512432 systemd[1]: Reloading... Jun 20 18:23:52.610036 zram_generator::config[2929]: No configuration found. Jun 20 18:23:52.673733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:52.757261 systemd[1]: Reloading finished in 244 ms. Jun 20 18:23:52.803073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 18:23:52.803125 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 18:23:52.803299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:52.803336 systemd[1]: kubelet.service: Consumed 70ms CPU time, 95M memory peak. Jun 20 18:23:52.804518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:53.755146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:53.761223 (kubelet)[2996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:23:53.786297 kubelet[2996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:53.786297 kubelet[2996]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jun 20 18:23:53.786297 kubelet[2996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:53.786575 kubelet[2996]: I0620 18:23:53.786339 2996 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:23:54.314012 kubelet[2996]: I0620 18:23:54.313955 2996 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:23:54.314012 kubelet[2996]: I0620 18:23:54.313986 2996 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:23:54.314342 kubelet[2996]: I0620 18:23:54.314318 2996 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:23:54.328165 kubelet[2996]: E0620 18:23:54.328127 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:54.329190 kubelet[2996]: I0620 18:23:54.329172 2996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:23:54.333748 kubelet[2996]: I0620 18:23:54.333721 2996 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:23:54.336259 kubelet[2996]: I0620 18:23:54.336239 2996 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:23:54.336711 kubelet[2996]: I0620 18:23:54.336682 2996 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:23:54.336832 kubelet[2996]: I0620 18:23:54.336712 2996 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-a1e4bb5c79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:23:54.336913 kubelet[2996]: I0620 18:23:54.336840 2996 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:23:54.336913 kubelet[2996]: I0620 18:23:54.336846 2996 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:23:54.336982 kubelet[2996]: I0620 18:23:54.336968 2996 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:23:54.338529 kubelet[2996]: I0620 18:23:54.338509 2996 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:23:54.338565 kubelet[2996]: I0620 18:23:54.338530 2996 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:23:54.338630 kubelet[2996]: I0620 18:23:54.338617 2996 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:23:54.338653 kubelet[2996]: I0620 18:23:54.338632 2996 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:23:54.342910 kubelet[2996]: I0620 18:23:54.342517 2996 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:23:54.342910 kubelet[2996]: I0620 18:23:54.342811 2996 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:23:54.342910 kubelet[2996]: W0620 18:23:54.342852 2996 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
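The "Creating Container Manager object based on Node Config" entry above dumps the whole nodeConfig as a JSON object, hard eviction thresholds included. A short sketch, assuming a journal line of that shape, that pulls the thresholds back out of it:

import json

def eviction_thresholds(journal_line: str):
    # The text after "nodeConfig=" begins with a JSON object; raw_decode parses that
    # object and ignores whatever log text follows it on the same line.
    start = journal_line.index("nodeConfig=") + len("nodeConfig=")
    node_config, _ = json.JSONDecoder().raw_decode(journal_line[start:])
    return node_config["HardEvictionThresholds"]

# Abridged copy of the entry above, trimmed to two thresholds for brevity.
line = ('... nodeConfig={"NodeName":"ci-4344.1.0-a-a1e4bb5c79","HardEvictionThresholds":'
        '[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},'
        '{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],'
        '"CgroupVersion":2} ...')
for t in eviction_thresholds(line):
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(t["Signal"], t["Operator"], value)

Run against the full entry above, this lists memory.available LessThan 100Mi, nodefs.available LessThan 10%, nodefs.inodesFree LessThan 5%, imagefs.available LessThan 15% and imagefs.inodesFree LessThan 5%.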
Jun 20 18:23:54.343330 kubelet[2996]: I0620 18:23:54.343311 2996 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:23:54.343365 kubelet[2996]: I0620 18:23:54.343341 2996 server.go:1287] "Started kubelet" Jun 20 18:23:54.343467 kubelet[2996]: W0620 18:23:54.343437 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a1e4bb5c79&limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:54.343507 kubelet[2996]: E0620 18:23:54.343473 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a1e4bb5c79&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:54.348321 kubelet[2996]: I0620 18:23:54.348289 2996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:23:54.350092 kubelet[2996]: E0620 18:23:54.349201 2996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.48:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-a1e4bb5c79.184ad364a14e81f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-a1e4bb5c79,UID:ci-4344.1.0-a-a1e4bb5c79,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-a1e4bb5c79,},FirstTimestamp:2025-06-20 18:23:54.343326192 +0000 UTC m=+0.579933923,LastTimestamp:2025-06-20 18:23:54.343326192 +0000 UTC m=+0.579933923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-a1e4bb5c79,}" Jun 20 18:23:54.350092 kubelet[2996]: W0620 18:23:54.349340 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:54.350092 kubelet[2996]: E0620 18:23:54.349371 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:54.350706 kubelet[2996]: I0620 18:23:54.350671 2996 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:23:54.352207 kubelet[2996]: I0620 18:23:54.351890 2996 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:23:54.352207 kubelet[2996]: I0620 18:23:54.351987 2996 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:23:54.352207 kubelet[2996]: E0620 18:23:54.352094 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:54.354160 kubelet[2996]: I0620 18:23:54.353623 2996 server.go:479] "Adding debug handlers to 
kubelet server" Jun 20 18:23:54.354289 kubelet[2996]: I0620 18:23:54.354239 2996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:23:54.354449 kubelet[2996]: I0620 18:23:54.354432 2996 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:23:54.355290 kubelet[2996]: E0620 18:23:54.354810 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a1e4bb5c79?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="200ms" Jun 20 18:23:54.355290 kubelet[2996]: I0620 18:23:54.354875 2996 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:23:54.355290 kubelet[2996]: I0620 18:23:54.354913 2996 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:23:54.355290 kubelet[2996]: W0620 18:23:54.355129 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:54.355290 kubelet[2996]: E0620 18:23:54.355157 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:54.355959 kubelet[2996]: I0620 18:23:54.355938 2996 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:23:54.356047 kubelet[2996]: I0620 18:23:54.356028 2996 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:23:54.356939 kubelet[2996]: I0620 18:23:54.356915 2996 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:23:54.363510 kubelet[2996]: I0620 18:23:54.363403 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:23:54.364371 kubelet[2996]: I0620 18:23:54.364162 2996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:23:54.364371 kubelet[2996]: I0620 18:23:54.364181 2996 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:23:54.364371 kubelet[2996]: I0620 18:23:54.364196 2996 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
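Every reflector, event and lease call above fails with "dial tcp 10.200.20.48:6443: connect: connection refused": the kubelet is up before the kube-apiserver it is about to launch as a static pod, so nothing is listening on 6443 yet. A quick, purely illustrative reachability check for that endpoint:

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # connection refused, timeout, host unreachable, ...
        return False

# False while the entries above are being logged; True once the apiserver pod is serving.
print(port_open("10.200.20.48", 6443))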
Jun 20 18:23:54.364371 kubelet[2996]: I0620 18:23:54.364201 2996 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:23:54.364371 kubelet[2996]: E0620 18:23:54.364232 2996 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:23:54.369244 kubelet[2996]: W0620 18:23:54.369213 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:54.369362 kubelet[2996]: E0620 18:23:54.369347 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:54.369531 kubelet[2996]: E0620 18:23:54.369516 2996 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:23:54.376563 kubelet[2996]: I0620 18:23:54.376541 2996 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:23:54.376696 kubelet[2996]: I0620 18:23:54.376555 2996 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:23:54.376696 kubelet[2996]: I0620 18:23:54.376626 2996 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:23:54.452849 kubelet[2996]: E0620 18:23:54.452809 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:54.465077 kubelet[2996]: E0620 18:23:54.465058 2996 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:23:54.553292 kubelet[2996]: E0620 18:23:54.553246 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:54.555859 kubelet[2996]: E0620 18:23:54.555832 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a1e4bb5c79?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="400ms" Jun 20 18:23:54.653550 kubelet[2996]: I0620 18:23:54.653440 2996 policy_none.go:49] "None policy: Start" Jun 20 18:23:54.653550 kubelet[2996]: I0620 18:23:54.653478 2996 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:23:54.653550 kubelet[2996]: I0620 18:23:54.653495 2996 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:23:54.653899 kubelet[2996]: E0620 18:23:54.653756 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:54.661071 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:23:54.666104 kubelet[2996]: E0620 18:23:54.666080 2996 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:23:54.670278 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jun 20 18:23:54.672680 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:23:54.683575 kubelet[2996]: I0620 18:23:54.683558 2996 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:23:54.683839 kubelet[2996]: I0620 18:23:54.683826 2996 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:23:54.683928 kubelet[2996]: I0620 18:23:54.683898 2996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:23:54.684191 kubelet[2996]: I0620 18:23:54.684173 2996 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:23:54.685783 kubelet[2996]: E0620 18:23:54.685761 2996 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:23:54.686035 kubelet[2996]: E0620 18:23:54.685992 2996 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:54.785986 kubelet[2996]: I0620 18:23:54.785953 2996 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:54.786352 kubelet[2996]: E0620 18:23:54.786322 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:54.956969 kubelet[2996]: E0620 18:23:54.956846 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a1e4bb5c79?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="800ms" Jun 20 18:23:54.988637 kubelet[2996]: I0620 18:23:54.988334 2996 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:54.988637 kubelet[2996]: E0620 18:23:54.988576 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.074714 systemd[1]: Created slice kubepods-burstable-pode1661e19f020cbcb53c0e2884c53e114.slice - libcontainer container kubepods-burstable-pode1661e19f020cbcb53c0e2884c53e114.slice. Jun 20 18:23:55.087086 kubelet[2996]: E0620 18:23:55.087011 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.089462 systemd[1]: Created slice kubepods-burstable-pod6145d2ef241dcef37ef3f85f73f379c7.slice - libcontainer container kubepods-burstable-pod6145d2ef241dcef37ef3f85f73f379c7.slice. Jun 20 18:23:55.096858 kubelet[2996]: E0620 18:23:55.096841 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.098871 systemd[1]: Created slice kubepods-burstable-pod187e5d5149c732ff2f10a4b55b485af0.slice - libcontainer container kubepods-burstable-pod187e5d5149c732ff2f10a4b55b485af0.slice. 
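The node-lease retries above back off geometrically: 200ms, then 400ms, then 800ms, and the next attempt further below waits 1.6s. A minimal reproduction of that progression, assuming a 200 ms base doubled on each retry; any cap on the interval is not visible in this excerpt:

# Retry intervals as logged: 200ms, 400ms, 800ms, 1.6s (doubling each attempt).
base_ms = 200
intervals = [base_ms * 2 ** attempt for attempt in range(4)]
print([f"{ms / 1000:g}s" if ms >= 1000 else f"{ms}ms" for ms in intervals])
# -> ['200ms', '400ms', '800ms', '1.6s']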
Jun 20 18:23:55.100472 kubelet[2996]: E0620 18:23:55.100455 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160024 kubelet[2996]: I0620 18:23:55.159952 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160024 kubelet[2996]: I0620 18:23:55.159982 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160247 kubelet[2996]: I0620 18:23:55.159996 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160247 kubelet[2996]: I0620 18:23:55.160163 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/187e5d5149c732ff2f10a4b55b485af0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"187e5d5149c732ff2f10a4b55b485af0\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160247 kubelet[2996]: I0620 18:23:55.160177 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160247 kubelet[2996]: I0620 18:23:55.160187 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160247 kubelet[2996]: I0620 18:23:55.160196 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160345 kubelet[2996]: I0620 18:23:55.160206 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: 
\"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.160345 kubelet[2996]: I0620 18:23:55.160215 2996 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.301037 kubelet[2996]: W0620 18:23:55.300968 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:55.301168 kubelet[2996]: E0620 18:23:55.301046 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:55.388077 containerd[1886]: time="2025-06-20T18:23:55.388037005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-a1e4bb5c79,Uid:e1661e19f020cbcb53c0e2884c53e114,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:55.390445 kubelet[2996]: I0620 18:23:55.390416 2996 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.390835 kubelet[2996]: E0620 18:23:55.390808 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:55.398272 containerd[1886]: time="2025-06-20T18:23:55.398235749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79,Uid:6145d2ef241dcef37ef3f85f73f379c7,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:55.402006 containerd[1886]: time="2025-06-20T18:23:55.401974045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-a1e4bb5c79,Uid:187e5d5149c732ff2f10a4b55b485af0,Namespace:kube-system,Attempt:0,}" Jun 20 18:23:55.477184 containerd[1886]: time="2025-06-20T18:23:55.477126544Z" level=info msg="connecting to shim 04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc" address="unix:///run/containerd/s/842a41715eed403e6f8da442c613cc15e59f8e10a96df3f2f04a7394827ac65a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:55.497888 containerd[1886]: time="2025-06-20T18:23:55.497670340Z" level=info msg="connecting to shim e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed" address="unix:///run/containerd/s/84ba442240efb163cf660f2aaca56811947cf3a10e2e39e379cb837a771de48f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:55.498146 systemd[1]: Started cri-containerd-04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc.scope - libcontainer container 04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc. 
Jun 20 18:23:55.511401 containerd[1886]: time="2025-06-20T18:23:55.511160657Z" level=info msg="connecting to shim c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59" address="unix:///run/containerd/s/c4ef78ea88d70de0726dc206a90195e1ac2ee168c15996cfc92922e93bd18a59" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:23:55.530130 systemd[1]: Started cri-containerd-e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed.scope - libcontainer container e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed. Jun 20 18:23:55.533356 systemd[1]: Started cri-containerd-c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59.scope - libcontainer container c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59. Jun 20 18:23:55.551543 containerd[1886]: time="2025-06-20T18:23:55.551333977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-a1e4bb5c79,Uid:e1661e19f020cbcb53c0e2884c53e114,Namespace:kube-system,Attempt:0,} returns sandbox id \"04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc\"" Jun 20 18:23:55.556649 containerd[1886]: time="2025-06-20T18:23:55.556457814Z" level=info msg="CreateContainer within sandbox \"04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:23:55.575988 kubelet[2996]: W0620 18:23:55.575944 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a1e4bb5c79&limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:55.575988 kubelet[2996]: E0620 18:23:55.575999 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-a1e4bb5c79&limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:55.645563 containerd[1886]: time="2025-06-20T18:23:55.645525289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79,Uid:6145d2ef241dcef37ef3f85f73f379c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed\"" Jun 20 18:23:55.648023 containerd[1886]: time="2025-06-20T18:23:55.647981976Z" level=info msg="CreateContainer within sandbox \"e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:23:55.690945 containerd[1886]: time="2025-06-20T18:23:55.690895871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-a1e4bb5c79,Uid:187e5d5149c732ff2f10a4b55b485af0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59\"" Jun 20 18:23:55.693119 containerd[1886]: time="2025-06-20T18:23:55.693094744Z" level=info msg="CreateContainer within sandbox \"c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:23:55.712183 kubelet[2996]: W0620 18:23:55.712095 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:55.712183 kubelet[2996]: E0620 18:23:55.712159 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:55.758064 kubelet[2996]: E0620 18:23:55.757992 2996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-a1e4bb5c79?timeout=10s\": dial tcp 10.200.20.48:6443: connect: connection refused" interval="1.6s" Jun 20 18:23:55.774635 kubelet[2996]: W0620 18:23:55.774590 2996 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.48:6443: connect: connection refused Jun 20 18:23:55.774719 kubelet[2996]: E0620 18:23:55.774645 2996 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:56.193214 kubelet[2996]: I0620 18:23:56.193146 2996 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:56.193686 kubelet[2996]: E0620 18:23:56.193651 2996 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.48:6443/api/v1/nodes\": dial tcp 10.200.20.48:6443: connect: connection refused" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:56.199168 containerd[1886]: time="2025-06-20T18:23:56.199078927Z" level=info msg="Container 95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:56.444943 kubelet[2996]: E0620 18:23:56.444835 2996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.48:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:23:56.498697 containerd[1886]: time="2025-06-20T18:23:56.498204914Z" level=info msg="Container b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:56.594430 containerd[1886]: time="2025-06-20T18:23:56.593381081Z" level=info msg="CreateContainer within sandbox \"04a89c4569b37dfdaf31ed2565eb363f265b5183f55bc42fd1d2a5b7ecdcfcbc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a\"" Jun 20 18:23:56.595242 containerd[1886]: time="2025-06-20T18:23:56.594954602Z" level=info msg="Container 21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:23:56.595592 containerd[1886]: time="2025-06-20T18:23:56.595555377Z" level=info msg="StartContainer 
for \"95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a\"" Jun 20 18:23:56.598607 containerd[1886]: time="2025-06-20T18:23:56.598510086Z" level=info msg="connecting to shim 95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a" address="unix:///run/containerd/s/842a41715eed403e6f8da442c613cc15e59f8e10a96df3f2f04a7394827ac65a" protocol=ttrpc version=3 Jun 20 18:23:56.616121 systemd[1]: Started cri-containerd-95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a.scope - libcontainer container 95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a. Jun 20 18:23:56.688297 containerd[1886]: time="2025-06-20T18:23:56.688231449Z" level=info msg="StartContainer for \"95eeae51b71c8e5b563aa1ab5ae418bec59b567d5f337ead194ed903701a367a\" returns successfully" Jun 20 18:23:57.146622 containerd[1886]: time="2025-06-20T18:23:57.146541645Z" level=info msg="CreateContainer within sandbox \"c4878032f9980fad8b8249287dfb1be259f6ff2da25015601db1f3284b872c59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29\"" Jun 20 18:23:57.147268 containerd[1886]: time="2025-06-20T18:23:57.147243647Z" level=info msg="StartContainer for \"b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29\"" Jun 20 18:23:57.150317 containerd[1886]: time="2025-06-20T18:23:57.150279286Z" level=info msg="connecting to shim b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29" address="unix:///run/containerd/s/c4ef78ea88d70de0726dc206a90195e1ac2ee168c15996cfc92922e93bd18a59" protocol=ttrpc version=3 Jun 20 18:23:57.172123 systemd[1]: Started cri-containerd-b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29.scope - libcontainer container b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29. Jun 20 18:23:57.196844 containerd[1886]: time="2025-06-20T18:23:57.196804195Z" level=info msg="CreateContainer within sandbox \"e3af68924faa0ebac99fc91505057a8ac89b5d174f10a58458948e653e49d6ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db\"" Jun 20 18:23:57.197857 containerd[1886]: time="2025-06-20T18:23:57.197814741Z" level=info msg="StartContainer for \"21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db\"" Jun 20 18:23:57.198634 containerd[1886]: time="2025-06-20T18:23:57.198570216Z" level=info msg="connecting to shim 21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db" address="unix:///run/containerd/s/84ba442240efb163cf660f2aaca56811947cf3a10e2e39e379cb837a771de48f" protocol=ttrpc version=3 Jun 20 18:23:57.223645 containerd[1886]: time="2025-06-20T18:23:57.223320777Z" level=info msg="StartContainer for \"b75f8e9d6dc26cb10abe19a5bafcad16c4abf51946bacf58bd762c9a6553de29\" returns successfully" Jun 20 18:23:57.225345 systemd[1]: Started cri-containerd-21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db.scope - libcontainer container 21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db. 
Jun 20 18:23:57.282303 containerd[1886]: time="2025-06-20T18:23:57.282191654Z" level=info msg="StartContainer for \"21eed541b63707ef5af2ee53b2ce2e0c8e2f6800bd4990dd7da9f83101b2a9db\" returns successfully" Jun 20 18:23:57.383663 kubelet[2996]: E0620 18:23:57.383456 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.387023 kubelet[2996]: E0620 18:23:57.386947 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.390235 kubelet[2996]: E0620 18:23:57.390218 2996 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.536630 kubelet[2996]: E0620 18:23:57.536586 2996 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-a1e4bb5c79\" not found" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.796080 kubelet[2996]: I0620 18:23:57.795801 2996 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.833117 kubelet[2996]: I0620 18:23:57.833076 2996 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:57.833117 kubelet[2996]: E0620 18:23:57.833108 2996 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-a1e4bb5c79\": node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:57.928999 kubelet[2996]: E0620 18:23:57.928960 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:58.029303 kubelet[2996]: E0620 18:23:58.029258 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:58.129641 kubelet[2996]: E0620 18:23:58.129520 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:58.230675 kubelet[2996]: E0620 18:23:58.230636 2996 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:23:58.253083 kubelet[2996]: I0620 18:23:58.253052 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.257192 kubelet[2996]: E0620 18:23:58.257166 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.257245 kubelet[2996]: I0620 18:23:58.257198 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.259009 kubelet[2996]: E0620 18:23:58.258425 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.259009 kubelet[2996]: I0620 18:23:58.258445 2996 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.259801 kubelet[2996]: E0620 18:23:58.259776 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.347130 kubelet[2996]: I0620 18:23:58.347100 2996 apiserver.go:52] "Watching apiserver" Jun 20 18:23:58.355810 kubelet[2996]: I0620 18:23:58.355786 2996 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:23:58.390037 kubelet[2996]: I0620 18:23:58.387828 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.390037 kubelet[2996]: I0620 18:23:58.388185 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.390037 kubelet[2996]: I0620 18:23:58.388689 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.390037 kubelet[2996]: E0620 18:23:58.389677 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.390361 kubelet[2996]: E0620 18:23:58.390088 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:58.390881 kubelet[2996]: E0620 18:23:58.390847 2996 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:59.389120 kubelet[2996]: I0620 18:23:59.389087 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:59.389751 kubelet[2996]: I0620 18:23:59.389411 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:59.389751 kubelet[2996]: I0620 18:23:59.389621 2996 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:23:59.397637 kubelet[2996]: W0620 18:23:59.397614 2996 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:23:59.401583 kubelet[2996]: W0620 18:23:59.401534 2996 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:23:59.401583 kubelet[2996]: W0620 18:23:59.401569 2996 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:23:59.517040 systemd[1]: Reload requested from client PID 3268 ('systemctl') (unit session-9.scope)... Jun 20 18:23:59.517055 systemd[1]: Reloading... 
Jun 20 18:23:59.596203 zram_generator::config[3314]: No configuration found. Jun 20 18:23:59.664595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:23:59.755513 systemd[1]: Reloading finished in 238 ms. Jun 20 18:23:59.771284 kubelet[2996]: I0620 18:23:59.771224 2996 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:23:59.771302 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:59.787722 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:23:59.787929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:59.787981 systemd[1]: kubelet.service: Consumed 827ms CPU time, 124.9M memory peak. Jun 20 18:23:59.789494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:23:59.888105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:23:59.893281 (kubelet)[3378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:23:59.927064 kubelet[3378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:59.927064 kubelet[3378]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:23:59.927064 kubelet[3378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:23:59.928131 kubelet[3378]: I0620 18:23:59.927798 3378 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:23:59.932097 kubelet[3378]: I0620 18:23:59.932065 3378 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 18:23:59.932097 kubelet[3378]: I0620 18:23:59.932090 3378 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:23:59.932355 kubelet[3378]: I0620 18:23:59.932336 3378 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 18:23:59.933273 kubelet[3378]: I0620 18:23:59.933253 3378 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:24:00.139026 kubelet[3378]: I0620 18:24:00.138958 3378 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:24:00.143645 kubelet[3378]: I0620 18:24:00.143604 3378 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:24:00.148618 kubelet[3378]: I0620 18:24:00.146987 3378 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:24:00.148618 kubelet[3378]: I0620 18:24:00.147156 3378 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:24:00.148618 kubelet[3378]: I0620 18:24:00.147182 3378 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-a1e4bb5c79","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:24:00.148618 kubelet[3378]: I0620 18:24:00.147357 3378 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147363 3378 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147404 3378 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147495 3378 kubelet.go:446] "Attempting to sync node with API server" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147503 3378 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147523 3378 kubelet.go:352] "Adding apiserver pod source" Jun 20 18:24:00.148782 kubelet[3378]: I0620 18:24:00.147531 3378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:24:00.149364 kubelet[3378]: I0620 18:24:00.149342 3378 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:24:00.149769 kubelet[3378]: I0620 18:24:00.149753 3378 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:24:00.153012 kubelet[3378]: I0620 18:24:00.152846 3378 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:24:00.153171 kubelet[3378]: I0620 18:24:00.153127 3378 server.go:1287] "Started kubelet" Jun 20 18:24:00.154084 kubelet[3378]: I0620 18:24:00.153750 3378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:24:00.154084 kubelet[3378]: I0620 
18:24:00.153957 3378 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:24:00.154084 kubelet[3378]: I0620 18:24:00.153997 3378 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:24:00.154816 kubelet[3378]: I0620 18:24:00.154641 3378 server.go:479] "Adding debug handlers to kubelet server" Jun 20 18:24:00.157008 kubelet[3378]: I0620 18:24:00.156985 3378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:24:00.160018 kubelet[3378]: E0620 18:24:00.159174 3378 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:24:00.160018 kubelet[3378]: I0620 18:24:00.159349 3378 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:24:00.161416 kubelet[3378]: E0620 18:24:00.161310 3378 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-a1e4bb5c79\" not found" Jun 20 18:24:00.161416 kubelet[3378]: I0620 18:24:00.161353 3378 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:24:00.161515 kubelet[3378]: I0620 18:24:00.161488 3378 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:24:00.161710 kubelet[3378]: I0620 18:24:00.161619 3378 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:24:00.162186 kubelet[3378]: I0620 18:24:00.162085 3378 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:24:00.162186 kubelet[3378]: I0620 18:24:00.162159 3378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:24:00.162956 kubelet[3378]: I0620 18:24:00.162938 3378 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:24:00.170106 kubelet[3378]: I0620 18:24:00.169627 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 18:24:00.170857 kubelet[3378]: I0620 18:24:00.170830 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:24:00.170857 kubelet[3378]: I0620 18:24:00.170849 3378 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 18:24:00.170929 kubelet[3378]: I0620 18:24:00.170863 3378 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 18:24:00.170929 kubelet[3378]: I0620 18:24:00.170869 3378 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 18:24:00.170929 kubelet[3378]: E0620 18:24:00.170898 3378 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:24:00.212734 kubelet[3378]: I0620 18:24:00.212595 3378 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:24:00.212734 kubelet[3378]: I0620 18:24:00.212618 3378 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:24:00.212734 kubelet[3378]: I0620 18:24:00.212637 3378 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:24:00.213378 kubelet[3378]: I0620 18:24:00.213355 3378 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:24:00.213423 kubelet[3378]: I0620 18:24:00.213374 3378 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:24:00.213423 kubelet[3378]: I0620 18:24:00.213390 3378 policy_none.go:49] "None policy: Start" Jun 20 18:24:00.213423 kubelet[3378]: I0620 18:24:00.213398 3378 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:24:00.213423 kubelet[3378]: I0620 18:24:00.213409 3378 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:24:00.213504 kubelet[3378]: I0620 18:24:00.213491 3378 state_mem.go:75] "Updated machine memory state" Jun 20 18:24:00.216898 kubelet[3378]: I0620 18:24:00.216667 3378 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:24:00.217261 kubelet[3378]: I0620 18:24:00.217246 3378 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:24:00.217393 kubelet[3378]: I0620 18:24:00.217260 3378 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:24:00.217429 kubelet[3378]: I0620 18:24:00.217398 3378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:24:00.218671 kubelet[3378]: E0620 18:24:00.218621 3378 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:24:00.271850 kubelet[3378]: I0620 18:24:00.271777 3378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.272045 kubelet[3378]: I0620 18:24:00.271778 3378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.272251 kubelet[3378]: I0620 18:24:00.272214 3378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.284869 kubelet[3378]: W0620 18:24:00.284749 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:24:00.285258 kubelet[3378]: E0620 18:24:00.284998 3378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.286022 kubelet[3378]: W0620 18:24:00.285912 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:24:00.286022 kubelet[3378]: E0620 18:24:00.285967 3378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.286571 kubelet[3378]: W0620 18:24:00.286550 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:24:00.286571 kubelet[3378]: E0620 18:24:00.286611 3378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.320538 kubelet[3378]: I0620 18:24:00.320479 3378 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.329055 kubelet[3378]: I0620 18:24:00.329032 3378 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.329139 kubelet[3378]: I0620 18:24:00.329092 3378 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.362929 kubelet[3378]: I0620 18:24:00.362904 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363103 kubelet[3378]: I0620 18:24:00.363052 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363103 kubelet[3378]: I0620 18:24:00.363073 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363103 kubelet[3378]: I0620 18:24:00.363083 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363260 kubelet[3378]: I0620 18:24:00.363093 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363260 kubelet[3378]: I0620 18:24:00.363213 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/187e5d5149c732ff2f10a4b55b485af0-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"187e5d5149c732ff2f10a4b55b485af0\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363260 kubelet[3378]: I0620 18:24:00.363227 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363260 kubelet[3378]: I0620 18:24:00.363236 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1661e19f020cbcb53c0e2884c53e114-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"e1661e19f020cbcb53c0e2884c53e114\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:00.363260 kubelet[3378]: I0620 18:24:00.363246 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6145d2ef241dcef37ef3f85f73f379c7-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79\" (UID: \"6145d2ef241dcef37ef3f85f73f379c7\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:01.150865 kubelet[3378]: I0620 18:24:01.150823 3378 apiserver.go:52] "Watching apiserver" Jun 20 18:24:01.162214 kubelet[3378]: I0620 18:24:01.162182 3378 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:24:01.198818 kubelet[3378]: I0620 18:24:01.198652 3378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:01.203379 kubelet[3378]: W0620 18:24:01.203350 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:24:01.203465 kubelet[3378]: E0620 18:24:01.203396 3378 kubelet.go:3196] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-a1e4bb5c79\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" Jun 20 18:24:01.222864 kubelet[3378]: I0620 18:24:01.222683 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-a1e4bb5c79" podStartSLOduration=2.222668735 podStartE2EDuration="2.222668735s" podCreationTimestamp="2025-06-20 18:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:01.213989493 +0000 UTC m=+1.317679103" watchObservedRunningTime="2025-06-20 18:24:01.222668735 +0000 UTC m=+1.326358345" Jun 20 18:24:01.222864 kubelet[3378]: I0620 18:24:01.222767 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-a1e4bb5c79" podStartSLOduration=2.222764226 podStartE2EDuration="2.222764226s" podCreationTimestamp="2025-06-20 18:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:01.222658991 +0000 UTC m=+1.326348609" watchObservedRunningTime="2025-06-20 18:24:01.222764226 +0000 UTC m=+1.326453836" Jun 20 18:24:01.240087 kubelet[3378]: I0620 18:24:01.240038 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-a1e4bb5c79" podStartSLOduration=2.240025755 podStartE2EDuration="2.240025755s" podCreationTimestamp="2025-06-20 18:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:01.231237438 +0000 UTC m=+1.334927048" watchObservedRunningTime="2025-06-20 18:24:01.240025755 +0000 UTC m=+1.343715365" Jun 20 18:24:04.847802 kubelet[3378]: I0620 18:24:04.847716 3378 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:24:04.848713 containerd[1886]: time="2025-06-20T18:24:04.848644771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:24:04.849199 kubelet[3378]: I0620 18:24:04.848907 3378 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:24:05.411842 sudo[3410]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:24:05.412118 sudo[3410]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:24:05.714696 systemd[1]: Created slice kubepods-besteffort-pod2f3e4d20_8d27_4092_bc19_3c2da807ef01.slice - libcontainer container kubepods-besteffort-pod2f3e4d20_8d27_4092_bc19_3c2da807ef01.slice. 
Jun 20 18:24:05.787447 sudo[3410]: pam_unix(sudo:session): session closed for user root Jun 20 18:24:05.812329 kubelet[3378]: I0620 18:24:05.789421 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f3e4d20-8d27-4092-bc19-3c2da807ef01-lib-modules\") pod \"kube-proxy-pt657\" (UID: \"2f3e4d20-8d27-4092-bc19-3c2da807ef01\") " pod="kube-system/kube-proxy-pt657" Jun 20 18:24:05.812329 kubelet[3378]: I0620 18:24:05.789481 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4crk\" (UniqueName: \"kubernetes.io/projected/2f3e4d20-8d27-4092-bc19-3c2da807ef01-kube-api-access-x4crk\") pod \"kube-proxy-pt657\" (UID: \"2f3e4d20-8d27-4092-bc19-3c2da807ef01\") " pod="kube-system/kube-proxy-pt657" Jun 20 18:24:05.812329 kubelet[3378]: I0620 18:24:05.789500 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f3e4d20-8d27-4092-bc19-3c2da807ef01-kube-proxy\") pod \"kube-proxy-pt657\" (UID: \"2f3e4d20-8d27-4092-bc19-3c2da807ef01\") " pod="kube-system/kube-proxy-pt657" Jun 20 18:24:05.812329 kubelet[3378]: I0620 18:24:05.789510 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f3e4d20-8d27-4092-bc19-3c2da807ef01-xtables-lock\") pod \"kube-proxy-pt657\" (UID: \"2f3e4d20-8d27-4092-bc19-3c2da807ef01\") " pod="kube-system/kube-proxy-pt657" Jun 20 18:24:06.045030 containerd[1886]: time="2025-06-20T18:24:06.044832241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pt657,Uid:2f3e4d20-8d27-4092-bc19-3c2da807ef01,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:06.156043 systemd[1]: Created slice kubepods-burstable-pod651ded85_ffca_47b5_8312_40de593c296f.slice - libcontainer container kubepods-burstable-pod651ded85_ffca_47b5_8312_40de593c296f.slice. Jun 20 18:24:06.166892 systemd[1]: Created slice kubepods-besteffort-pod6242d19c_c967_435a_b3ba_42f41b4c0e7c.slice - libcontainer container kubepods-besteffort-pod6242d19c_c967_435a_b3ba_42f41b4c0e7c.slice. 
Jun 20 18:24:06.191246 kubelet[3378]: I0620 18:24:06.191222 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-etc-cni-netd\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192613 kubelet[3378]: I0620 18:24:06.191610 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-net\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192613 kubelet[3378]: I0620 18:24:06.191634 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-run\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192613 kubelet[3378]: I0620 18:24:06.191648 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbbp9\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-kube-api-access-pbbp9\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192613 kubelet[3378]: I0620 18:24:06.191663 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6242d19c-c967-435a-b3ba-42f41b4c0e7c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qmhkx\" (UID: \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\") " pod="kube-system/cilium-operator-6c4d7847fc-qmhkx" Jun 20 18:24:06.192613 kubelet[3378]: I0620 18:24:06.191674 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-xtables-lock\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192707 kubelet[3378]: I0620 18:24:06.191684 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-kernel\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192707 kubelet[3378]: I0620 18:24:06.191694 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mxs\" (UniqueName: \"kubernetes.io/projected/6242d19c-c967-435a-b3ba-42f41b4c0e7c-kube-api-access-z5mxs\") pod \"cilium-operator-6c4d7847fc-qmhkx\" (UID: \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\") " pod="kube-system/cilium-operator-6c4d7847fc-qmhkx" Jun 20 18:24:06.192707 kubelet[3378]: I0620 18:24:06.191703 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-bpf-maps\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192707 kubelet[3378]: I0620 18:24:06.191715 3378 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-lib-modules\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192707 kubelet[3378]: I0620 18:24:06.191725 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651ded85-ffca-47b5-8312-40de593c296f-clustermesh-secrets\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192775 kubelet[3378]: I0620 18:24:06.191733 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651ded85-ffca-47b5-8312-40de593c296f-cilium-config-path\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192775 kubelet[3378]: I0620 18:24:06.191741 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-hubble-tls\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192775 kubelet[3378]: I0620 18:24:06.191751 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-cgroup\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192775 kubelet[3378]: I0620 18:24:06.191766 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-hostproc\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.192775 kubelet[3378]: I0620 18:24:06.191777 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cni-path\") pod \"cilium-bnr6q\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " pod="kube-system/cilium-bnr6q" Jun 20 18:24:06.461934 containerd[1886]: time="2025-06-20T18:24:06.461801526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnr6q,Uid:651ded85-ffca-47b5-8312-40de593c296f,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:06.471427 containerd[1886]: time="2025-06-20T18:24:06.471377852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qmhkx,Uid:6242d19c-c967-435a-b3ba-42f41b4c0e7c,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:12.100360 containerd[1886]: time="2025-06-20T18:24:12.100281753Z" level=info msg="connecting to shim 52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:12.121486 containerd[1886]: time="2025-06-20T18:24:12.121426287Z" level=info msg="connecting to shim b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41" 
address="unix:///run/containerd/s/e8cef6741c1fb12f91171844d72dee3802ce4f1e99f54c86690bb01dc46847cb" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:12.128881 containerd[1886]: time="2025-06-20T18:24:12.128737779Z" level=info msg="connecting to shim 8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa" address="unix:///run/containerd/s/26163462b96d537b6dd9512fa9b0cae69e7bcf41160e59e866c3190917197685" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:12.129294 systemd[1]: Started cri-containerd-52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d.scope - libcontainer container 52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d. Jun 20 18:24:12.154178 systemd[1]: Started cri-containerd-b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41.scope - libcontainer container b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41. Jun 20 18:24:12.157900 systemd[1]: Started cri-containerd-8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa.scope - libcontainer container 8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa. Jun 20 18:24:12.163405 containerd[1886]: time="2025-06-20T18:24:12.163362504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnr6q,Uid:651ded85-ffca-47b5-8312-40de593c296f,Namespace:kube-system,Attempt:0,} returns sandbox id \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\"" Jun 20 18:24:12.168307 containerd[1886]: time="2025-06-20T18:24:12.168266855Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:24:12.191417 containerd[1886]: time="2025-06-20T18:24:12.191381349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pt657,Uid:2f3e4d20-8d27-4092-bc19-3c2da807ef01,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa\"" Jun 20 18:24:12.196884 containerd[1886]: time="2025-06-20T18:24:12.196767794Z" level=info msg="CreateContainer within sandbox \"8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:24:12.199027 containerd[1886]: time="2025-06-20T18:24:12.198536477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qmhkx,Uid:6242d19c-c967-435a-b3ba-42f41b4c0e7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\"" Jun 20 18:24:12.223193 containerd[1886]: time="2025-06-20T18:24:12.223160528Z" level=info msg="Container b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:12.240073 containerd[1886]: time="2025-06-20T18:24:12.240043346Z" level=info msg="CreateContainer within sandbox \"8b933cdd09e93020abbf3c4c1dc4dbdf592e70397f8163eaa694a7a96a9147aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1\"" Jun 20 18:24:12.241161 containerd[1886]: time="2025-06-20T18:24:12.241124609Z" level=info msg="StartContainer for \"b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1\"" Jun 20 18:24:12.243136 containerd[1886]: time="2025-06-20T18:24:12.243082986Z" level=info msg="connecting to shim b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1" 
address="unix:///run/containerd/s/26163462b96d537b6dd9512fa9b0cae69e7bcf41160e59e866c3190917197685" protocol=ttrpc version=3 Jun 20 18:24:12.260148 systemd[1]: Started cri-containerd-b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1.scope - libcontainer container b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1. Jun 20 18:24:12.290810 containerd[1886]: time="2025-06-20T18:24:12.290776850Z" level=info msg="StartContainer for \"b7abd03458ba3d0dc2cc64514b0467361659d56ff61b726420a8c03e098934f1\" returns successfully" Jun 20 18:24:13.656322 kubelet[3378]: I0620 18:24:13.656196 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pt657" podStartSLOduration=8.656180185 podStartE2EDuration="8.656180185s" podCreationTimestamp="2025-06-20 18:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:13.226159704 +0000 UTC m=+13.329849314" watchObservedRunningTime="2025-06-20 18:24:13.656180185 +0000 UTC m=+13.759869827" Jun 20 18:24:18.541863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4064362323.mount: Deactivated successfully. Jun 20 18:24:20.299472 containerd[1886]: time="2025-06-20T18:24:20.299426539Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:20.302679 containerd[1886]: time="2025-06-20T18:24:20.302656394Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jun 20 18:24:20.307789 containerd[1886]: time="2025-06-20T18:24:20.307763409Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:20.308884 containerd[1886]: time="2025-06-20T18:24:20.308466630Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.140160925s" Jun 20 18:24:20.308884 containerd[1886]: time="2025-06-20T18:24:20.308705645Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jun 20 18:24:20.309661 containerd[1886]: time="2025-06-20T18:24:20.309634896Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:24:20.311881 containerd[1886]: time="2025-06-20T18:24:20.311843617Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:24:20.336627 containerd[1886]: time="2025-06-20T18:24:20.336444597Z" level=info msg="Container 6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:20.348154 containerd[1886]: 
time="2025-06-20T18:24:20.348092684Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\"" Jun 20 18:24:20.348697 containerd[1886]: time="2025-06-20T18:24:20.348517201Z" level=info msg="StartContainer for \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\"" Jun 20 18:24:20.349307 containerd[1886]: time="2025-06-20T18:24:20.349282511Z" level=info msg="connecting to shim 6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" protocol=ttrpc version=3 Jun 20 18:24:20.371129 systemd[1]: Started cri-containerd-6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a.scope - libcontainer container 6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a. Jun 20 18:24:20.392755 containerd[1886]: time="2025-06-20T18:24:20.392694901Z" level=info msg="StartContainer for \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" returns successfully" Jun 20 18:24:20.397913 systemd[1]: cri-containerd-6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a.scope: Deactivated successfully. Jun 20 18:24:20.400230 containerd[1886]: time="2025-06-20T18:24:20.399085993Z" level=info msg="received exit event container_id:\"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" id:\"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" pid:3766 exited_at:{seconds:1750443860 nanos:398589547}" Jun 20 18:24:20.400230 containerd[1886]: time="2025-06-20T18:24:20.399164075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" id:\"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" pid:3766 exited_at:{seconds:1750443860 nanos:398589547}" Jun 20 18:24:21.334541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:22.236779 containerd[1886]: time="2025-06-20T18:24:22.236735815Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:24:22.257741 containerd[1886]: time="2025-06-20T18:24:22.257303636Z" level=info msg="Container b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:22.275573 containerd[1886]: time="2025-06-20T18:24:22.275491651Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\"" Jun 20 18:24:22.276911 containerd[1886]: time="2025-06-20T18:24:22.276842163Z" level=info msg="StartContainer for \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\"" Jun 20 18:24:22.278285 containerd[1886]: time="2025-06-20T18:24:22.278263749Z" level=info msg="connecting to shim b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" protocol=ttrpc version=3 Jun 20 18:24:22.297118 systemd[1]: Started cri-containerd-b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893.scope - libcontainer container b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893. Jun 20 18:24:22.322421 containerd[1886]: time="2025-06-20T18:24:22.322386368Z" level=info msg="StartContainer for \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" returns successfully" Jun 20 18:24:22.331339 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:24:22.331500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:24:22.331730 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:24:22.333206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:24:22.335682 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:24:22.338124 systemd[1]: cri-containerd-b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893.scope: Deactivated successfully. Jun 20 18:24:22.338875 containerd[1886]: time="2025-06-20T18:24:22.338845052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" id:\"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" pid:3812 exited_at:{seconds:1750443862 nanos:338558452}" Jun 20 18:24:22.338875 containerd[1886]: time="2025-06-20T18:24:22.338848900Z" level=info msg="received exit event container_id:\"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" id:\"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" pid:3812 exited_at:{seconds:1750443862 nanos:338558452}" Jun 20 18:24:22.358578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893-rootfs.mount: Deactivated successfully. Jun 20 18:24:22.361019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:24:23.101846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897774699.mount: Deactivated successfully. 
Jun 20 18:24:23.240579 containerd[1886]: time="2025-06-20T18:24:23.240530894Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:24:23.311878 containerd[1886]: time="2025-06-20T18:24:23.311832702Z" level=info msg="Container 3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:23.333185 containerd[1886]: time="2025-06-20T18:24:23.332663890Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\"" Jun 20 18:24:23.335072 containerd[1886]: time="2025-06-20T18:24:23.335046280Z" level=info msg="StartContainer for \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\"" Jun 20 18:24:23.338913 containerd[1886]: time="2025-06-20T18:24:23.338879722Z" level=info msg="connecting to shim 3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" protocol=ttrpc version=3 Jun 20 18:24:23.358150 systemd[1]: Started cri-containerd-3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3.scope - libcontainer container 3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3. Jun 20 18:24:23.392448 systemd[1]: cri-containerd-3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3.scope: Deactivated successfully. Jun 20 18:24:23.395474 containerd[1886]: time="2025-06-20T18:24:23.395445556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" id:\"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" pid:3867 exited_at:{seconds:1750443863 nanos:394531713}" Jun 20 18:24:23.397319 containerd[1886]: time="2025-06-20T18:24:23.397274531Z" level=info msg="received exit event container_id:\"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" id:\"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" pid:3867 exited_at:{seconds:1750443863 nanos:394531713}" Jun 20 18:24:23.398921 containerd[1886]: time="2025-06-20T18:24:23.398882611Z" level=info msg="StartContainer for \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" returns successfully" Jun 20 18:24:23.417895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3-rootfs.mount: Deactivated successfully. 
Jun 20 18:24:23.669230 containerd[1886]: time="2025-06-20T18:24:23.669126382Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:23.672401 containerd[1886]: time="2025-06-20T18:24:23.672282124Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jun 20 18:24:23.676524 containerd[1886]: time="2025-06-20T18:24:23.676499905Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:24:23.677509 containerd[1886]: time="2025-06-20T18:24:23.677387164Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.367729251s" Jun 20 18:24:23.677509 containerd[1886]: time="2025-06-20T18:24:23.677410348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jun 20 18:24:23.680160 containerd[1886]: time="2025-06-20T18:24:23.679997393Z" level=info msg="CreateContainer within sandbox \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:24:23.696031 containerd[1886]: time="2025-06-20T18:24:23.695993965Z" level=info msg="Container 434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:23.711807 containerd[1886]: time="2025-06-20T18:24:23.711777650Z" level=info msg="CreateContainer within sandbox \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\"" Jun 20 18:24:23.712421 containerd[1886]: time="2025-06-20T18:24:23.712395533Z" level=info msg="StartContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\"" Jun 20 18:24:23.713119 containerd[1886]: time="2025-06-20T18:24:23.713092761Z" level=info msg="connecting to shim 434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0" address="unix:///run/containerd/s/e8cef6741c1fb12f91171844d72dee3802ce4f1e99f54c86690bb01dc46847cb" protocol=ttrpc version=3 Jun 20 18:24:23.729123 systemd[1]: Started cri-containerd-434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0.scope - libcontainer container 434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0. 
Jun 20 18:24:23.753264 containerd[1886]: time="2025-06-20T18:24:23.753234859Z" level=info msg="StartContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" returns successfully" Jun 20 18:24:24.247598 containerd[1886]: time="2025-06-20T18:24:24.247216179Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:24:24.274613 containerd[1886]: time="2025-06-20T18:24:24.271543543Z" level=info msg="Container 2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:24.278209 kubelet[3378]: I0620 18:24:24.277559 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qmhkx" podStartSLOduration=6.7987431879999995 podStartE2EDuration="18.277545609s" podCreationTimestamp="2025-06-20 18:24:06 +0000 UTC" firstStartedPulling="2025-06-20 18:24:12.199346573 +0000 UTC m=+12.303036183" lastFinishedPulling="2025-06-20 18:24:23.678148994 +0000 UTC m=+23.781838604" observedRunningTime="2025-06-20 18:24:24.277386716 +0000 UTC m=+24.381076326" watchObservedRunningTime="2025-06-20 18:24:24.277545609 +0000 UTC m=+24.381235219" Jun 20 18:24:24.287122 containerd[1886]: time="2025-06-20T18:24:24.287080237Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\"" Jun 20 18:24:24.288102 containerd[1886]: time="2025-06-20T18:24:24.288073658Z" level=info msg="StartContainer for \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\"" Jun 20 18:24:24.290275 containerd[1886]: time="2025-06-20T18:24:24.290250979Z" level=info msg="connecting to shim 2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" protocol=ttrpc version=3 Jun 20 18:24:24.312405 systemd[1]: Started cri-containerd-2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05.scope - libcontainer container 2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05. Jun 20 18:24:24.349960 systemd[1]: cri-containerd-2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05.scope: Deactivated successfully. 
Jun 20 18:24:24.351759 containerd[1886]: time="2025-06-20T18:24:24.351675597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" id:\"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" pid:3947 exited_at:{seconds:1750443864 nanos:351418414}" Jun 20 18:24:24.356420 containerd[1886]: time="2025-06-20T18:24:24.356380209Z" level=info msg="received exit event container_id:\"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" id:\"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" pid:3947 exited_at:{seconds:1750443864 nanos:351418414}" Jun 20 18:24:24.365242 containerd[1886]: time="2025-06-20T18:24:24.365215904Z" level=info msg="StartContainer for \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" returns successfully" Jun 20 18:24:24.375844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05-rootfs.mount: Deactivated successfully. Jun 20 18:24:25.257344 containerd[1886]: time="2025-06-20T18:24:25.256620178Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:24:25.288597 containerd[1886]: time="2025-06-20T18:24:25.288560904Z" level=info msg="Container 14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:25.290529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982350843.mount: Deactivated successfully. Jun 20 18:24:25.306153 containerd[1886]: time="2025-06-20T18:24:25.306118154Z" level=info msg="CreateContainer within sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\"" Jun 20 18:24:25.307029 containerd[1886]: time="2025-06-20T18:24:25.306755469Z" level=info msg="StartContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\"" Jun 20 18:24:25.307705 containerd[1886]: time="2025-06-20T18:24:25.307657520Z" level=info msg="connecting to shim 14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9" address="unix:///run/containerd/s/616863a89d9ec749c7cf41d0b44f267a3ddd7692bf826e9e656630de9645cefc" protocol=ttrpc version=3 Jun 20 18:24:25.327233 systemd[1]: Started cri-containerd-14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9.scope - libcontainer container 14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9. 
Jun 20 18:24:25.352683 containerd[1886]: time="2025-06-20T18:24:25.352494973Z" level=info msg="StartContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" returns successfully" Jun 20 18:24:25.403551 containerd[1886]: time="2025-06-20T18:24:25.403516618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"e62a66f84cee2fb4e722e54f9d0a1d7d6e89baa0dde9dfea0958711b96c48997\" pid:4015 exited_at:{seconds:1750443865 nanos:402983282}" Jun 20 18:24:25.500419 kubelet[3378]: I0620 18:24:25.500390 3378 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:24:25.537020 systemd[1]: Created slice kubepods-burstable-poda0be5e0d_be7b_4afb_b328_a7f238c72a77.slice - libcontainer container kubepods-burstable-poda0be5e0d_be7b_4afb_b328_a7f238c72a77.slice. Jun 20 18:24:25.545725 systemd[1]: Created slice kubepods-burstable-pod5d422d83_a658_4cd8_909a_4b8b61047acd.slice - libcontainer container kubepods-burstable-pod5d422d83_a658_4cd8_909a_4b8b61047acd.slice. Jun 20 18:24:25.599621 kubelet[3378]: I0620 18:24:25.599577 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d422d83-a658-4cd8-909a-4b8b61047acd-config-volume\") pod \"coredns-668d6bf9bc-sqmqv\" (UID: \"5d422d83-a658-4cd8-909a-4b8b61047acd\") " pod="kube-system/coredns-668d6bf9bc-sqmqv" Jun 20 18:24:25.599621 kubelet[3378]: I0620 18:24:25.599625 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0be5e0d-be7b-4afb-b328-a7f238c72a77-config-volume\") pod \"coredns-668d6bf9bc-lngn4\" (UID: \"a0be5e0d-be7b-4afb-b328-a7f238c72a77\") " pod="kube-system/coredns-668d6bf9bc-lngn4" Jun 20 18:24:25.599782 kubelet[3378]: I0620 18:24:25.599640 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn9vm\" (UniqueName: \"kubernetes.io/projected/5d422d83-a658-4cd8-909a-4b8b61047acd-kube-api-access-gn9vm\") pod \"coredns-668d6bf9bc-sqmqv\" (UID: \"5d422d83-a658-4cd8-909a-4b8b61047acd\") " pod="kube-system/coredns-668d6bf9bc-sqmqv" Jun 20 18:24:25.599782 kubelet[3378]: I0620 18:24:25.599652 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwk4b\" (UniqueName: \"kubernetes.io/projected/a0be5e0d-be7b-4afb-b328-a7f238c72a77-kube-api-access-mwk4b\") pod \"coredns-668d6bf9bc-lngn4\" (UID: \"a0be5e0d-be7b-4afb-b328-a7f238c72a77\") " pod="kube-system/coredns-668d6bf9bc-lngn4" Jun 20 18:24:25.844636 containerd[1886]: time="2025-06-20T18:24:25.844549148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lngn4,Uid:a0be5e0d-be7b-4afb-b328-a7f238c72a77,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:25.849164 containerd[1886]: time="2025-06-20T18:24:25.849120964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqmqv,Uid:5d422d83-a658-4cd8-909a-4b8b61047acd,Namespace:kube-system,Attempt:0,}" Jun 20 18:24:26.274640 kubelet[3378]: I0620 18:24:26.274587 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bnr6q" podStartSLOduration=12.131399745 podStartE2EDuration="20.274573871s" podCreationTimestamp="2025-06-20 18:24:06 +0000 UTC" firstStartedPulling="2025-06-20 18:24:12.166376744 +0000 UTC 
m=+12.270066354" lastFinishedPulling="2025-06-20 18:24:20.309550869 +0000 UTC m=+20.413240480" observedRunningTime="2025-06-20 18:24:26.273626243 +0000 UTC m=+26.377315853" watchObservedRunningTime="2025-06-20 18:24:26.274573871 +0000 UTC m=+26.378263481" Jun 20 18:24:26.625718 containerd[1886]: time="2025-06-20T18:24:26.625609245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"ed13a4242f10d340cbd00b4925b4f56a3f380a731b3565381f776c5d50a15a2b\" pid:4117 exit_status:1 exited_at:{seconds:1750443866 nanos:625319125}" Jun 20 18:24:27.496459 systemd-networkd[1482]: cilium_host: Link UP Jun 20 18:24:27.497858 systemd-networkd[1482]: cilium_net: Link UP Jun 20 18:24:27.498591 systemd-networkd[1482]: cilium_net: Gained carrier Jun 20 18:24:27.498777 systemd-networkd[1482]: cilium_host: Gained carrier Jun 20 18:24:27.686228 systemd-networkd[1482]: cilium_vxlan: Link UP Jun 20 18:24:27.686233 systemd-networkd[1482]: cilium_vxlan: Gained carrier Jun 20 18:24:27.780142 systemd-networkd[1482]: cilium_host: Gained IPv6LL Jun 20 18:24:27.988161 systemd-networkd[1482]: cilium_net: Gained IPv6LL Jun 20 18:24:28.021804 kernel: NET: Registered PF_ALG protocol family Jun 20 18:24:28.704366 containerd[1886]: time="2025-06-20T18:24:28.704331680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"7940802629ca2b1a7aedf7a20d5c0bff30a85f21a905f2a6c81496f02a51dbb5\" pid:4455 exit_status:1 exited_at:{seconds:1750443868 nanos:703353923}" Jun 20 18:24:28.730806 systemd-networkd[1482]: lxc_health: Link UP Jun 20 18:24:28.737606 systemd-networkd[1482]: lxc_health: Gained carrier Jun 20 18:24:28.900175 systemd-networkd[1482]: lxcfa190f598e7a: Link UP Jun 20 18:24:28.904049 kernel: eth0: renamed from tmp5e147 Jun 20 18:24:28.907243 systemd-networkd[1482]: lxcfa190f598e7a: Gained carrier Jun 20 18:24:28.922733 systemd-networkd[1482]: lxc8956322c8b15: Link UP Jun 20 18:24:28.926481 kernel: eth0: renamed from tmpfe2fe Jun 20 18:24:28.928242 systemd-networkd[1482]: lxc8956322c8b15: Gained carrier Jun 20 18:24:29.284183 systemd-networkd[1482]: cilium_vxlan: Gained IPv6LL Jun 20 18:24:30.052186 systemd-networkd[1482]: lxc_health: Gained IPv6LL Jun 20 18:24:30.308171 systemd-networkd[1482]: lxcfa190f598e7a: Gained IPv6LL Jun 20 18:24:30.436136 systemd-networkd[1482]: lxc8956322c8b15: Gained IPv6LL Jun 20 18:24:30.781818 containerd[1886]: time="2025-06-20T18:24:30.781659903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"86c09b8007134c87a10f9347c0e4285a387105f98e06855468b504634188d60c\" pid:4546 exited_at:{seconds:1750443870 nanos:781435809}" Jun 20 18:24:31.461430 containerd[1886]: time="2025-06-20T18:24:31.461360793Z" level=info msg="connecting to shim fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f" address="unix:///run/containerd/s/8b71cc162141b7cd9abd18309cab397f5d295390d0482d58c619fa504a36ef7d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:31.462279 containerd[1886]: time="2025-06-20T18:24:31.462225455Z" level=info msg="connecting to shim 5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280" address="unix:///run/containerd/s/bcc655791e6a9ea97b39ca87505eeff57fb606e9b26dbea1f426ce02c2d62086" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:24:31.487127 systemd[1]: Started 
cri-containerd-5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280.scope - libcontainer container 5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280. Jun 20 18:24:31.487975 systemd[1]: Started cri-containerd-fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f.scope - libcontainer container fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f. Jun 20 18:24:31.523935 containerd[1886]: time="2025-06-20T18:24:31.523898883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lngn4,Uid:a0be5e0d-be7b-4afb-b328-a7f238c72a77,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280\"" Jun 20 18:24:31.527063 containerd[1886]: time="2025-06-20T18:24:31.526990861Z" level=info msg="CreateContainer within sandbox \"5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:24:31.527797 containerd[1886]: time="2025-06-20T18:24:31.527766642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqmqv,Uid:5d422d83-a658-4cd8-909a-4b8b61047acd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f\"" Jun 20 18:24:31.536069 containerd[1886]: time="2025-06-20T18:24:31.536038909Z" level=info msg="CreateContainer within sandbox \"fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:24:31.560521 containerd[1886]: time="2025-06-20T18:24:31.560486557Z" level=info msg="Container efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:31.566023 containerd[1886]: time="2025-06-20T18:24:31.565929710Z" level=info msg="Container 5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:24:31.578903 containerd[1886]: time="2025-06-20T18:24:31.578872853Z" level=info msg="CreateContainer within sandbox \"5e147f9d293d30a4e14335a16a3108f7773e0eb302924f7476a2b763cd8de280\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f\"" Jun 20 18:24:31.579767 containerd[1886]: time="2025-06-20T18:24:31.579704107Z" level=info msg="StartContainer for \"efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f\"" Jun 20 18:24:31.580441 containerd[1886]: time="2025-06-20T18:24:31.580382973Z" level=info msg="connecting to shim efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f" address="unix:///run/containerd/s/bcc655791e6a9ea97b39ca87505eeff57fb606e9b26dbea1f426ce02c2d62086" protocol=ttrpc version=3 Jun 20 18:24:31.590691 containerd[1886]: time="2025-06-20T18:24:31.590657357Z" level=info msg="CreateContainer within sandbox \"fe2fe540f56a6a61d0dc0630092dd929de63a0039220421ff057042470d3064f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5\"" Jun 20 18:24:31.592393 containerd[1886]: time="2025-06-20T18:24:31.592303273Z" level=info msg="StartContainer for \"5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5\"" Jun 20 18:24:31.594135 containerd[1886]: time="2025-06-20T18:24:31.594108729Z" level=info msg="connecting to shim 5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5" 
address="unix:///run/containerd/s/8b71cc162141b7cd9abd18309cab397f5d295390d0482d58c619fa504a36ef7d" protocol=ttrpc version=3 Jun 20 18:24:31.597138 systemd[1]: Started cri-containerd-efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f.scope - libcontainer container efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f. Jun 20 18:24:31.609106 systemd[1]: Started cri-containerd-5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5.scope - libcontainer container 5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5. Jun 20 18:24:31.634470 containerd[1886]: time="2025-06-20T18:24:31.634439111Z" level=info msg="StartContainer for \"efc8eab625b419d69322847694c1e26a1ce4a8a445e7297019d11fd7f902137f\" returns successfully" Jun 20 18:24:31.640941 containerd[1886]: time="2025-06-20T18:24:31.640861793Z" level=info msg="StartContainer for \"5522d05abd32f360afb626130594e75d7b0c36f56312a1b005ed038f01fae5a5\" returns successfully" Jun 20 18:24:32.289861 kubelet[3378]: I0620 18:24:32.289801 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sqmqv" podStartSLOduration=27.28978717 podStartE2EDuration="27.28978717s" podCreationTimestamp="2025-06-20 18:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:32.287547751 +0000 UTC m=+32.391237361" watchObservedRunningTime="2025-06-20 18:24:32.28978717 +0000 UTC m=+32.393476780" Jun 20 18:24:32.306340 kubelet[3378]: I0620 18:24:32.306210 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lngn4" podStartSLOduration=27.306198101 podStartE2EDuration="27.306198101s" podCreationTimestamp="2025-06-20 18:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:24:32.304080037 +0000 UTC m=+32.407769687" watchObservedRunningTime="2025-06-20 18:24:32.306198101 +0000 UTC m=+32.409887711" Jun 20 18:24:32.451634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7250353.mount: Deactivated successfully. 
Jun 20 18:24:32.850853 containerd[1886]: time="2025-06-20T18:24:32.850800912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"7ae1d70c5ed008ad2b9d141e2a2367e5adca1cb377237e4988bc36867b9d48e6\" pid:4738 exited_at:{seconds:1750443872 nanos:850204496}" Jun 20 18:24:34.917436 containerd[1886]: time="2025-06-20T18:24:34.917394342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"88dbc61906dd448dbd09a87d76aba6bc76cb1d904c40fa05ba6e48c6a6a175cc\" pid:4763 exited_at:{seconds:1750443874 nanos:916955562}" Jun 20 18:24:35.019206 containerd[1886]: time="2025-06-20T18:24:35.019168177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"8ae744d91463e1da0e5223a4a1feef5c7914fe6fa7cb96c03d02a5017db4b8b9\" pid:4793 exited_at:{seconds:1750443875 nanos:18842248}" Jun 20 18:24:35.255883 sudo[2352]: pam_unix(sudo:session): session closed for user root Jun 20 18:24:35.343221 sshd[2351]: Connection closed by 10.200.16.10 port 36134 Jun 20 18:24:35.343841 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Jun 20 18:24:35.347234 systemd[1]: sshd@6-10.200.20.48:22-10.200.16.10:36134.service: Deactivated successfully. Jun 20 18:24:35.348798 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:24:35.349033 systemd[1]: session-9.scope: Consumed 3.412s CPU time, 269.4M memory peak. Jun 20 18:24:35.349947 systemd-logind[1869]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:24:35.351335 systemd-logind[1869]: Removed session 9. Jun 20 18:26:38.626913 systemd[1]: Started sshd@7-10.200.20.48:22-10.200.16.10:52378.service - OpenSSH per-connection server daemon (10.200.16.10:52378). Jun 20 18:26:39.079023 sshd[4841]: Accepted publickey for core from 10.200.16.10 port 52378 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:39.080106 sshd-session[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:39.083862 systemd-logind[1869]: New session 10 of user core. Jun 20 18:26:39.092139 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:26:39.551065 sshd[4843]: Connection closed by 10.200.16.10 port 52378 Jun 20 18:26:39.551622 sshd-session[4841]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:39.554772 systemd[1]: sshd@7-10.200.20.48:22-10.200.16.10:52378.service: Deactivated successfully. Jun 20 18:26:39.556831 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:26:39.558058 systemd-logind[1869]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:26:39.559814 systemd-logind[1869]: Removed session 10. Jun 20 18:26:44.651198 systemd[1]: Started sshd@8-10.200.20.48:22-10.200.16.10:52380.service - OpenSSH per-connection server daemon (10.200.16.10:52380). Jun 20 18:26:45.136778 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 52380 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:45.137828 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:45.141270 systemd-logind[1869]: New session 11 of user core. Jun 20 18:26:45.147220 systemd[1]: Started session-11.scope - Session 11 of User core. 
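From here on the journal is dominated by short SSH sessions, each following the pattern seen above: sshd accepts a publickey, pam_unix opens the session, systemd-logind announces "New session N of user core", a session-N.scope runs, and on disconnect the scope is deactivated and logind logs "Removed session N". Purely as an illustration, the sketch below pairs the "New session" and "Removed session" logind lines to report how long each session lasted; it assumes the journal prefix format seen here, and since these timestamps carry no year, one is supplied.

```python
import re
from datetime import datetime

NEW  = re.compile(r"^(?P<ts>\w{3} \d{2} [\d:.]+) .*systemd-logind\[\d+\]: New session (?P<id>\d+) of user (?P<user>\w+)")
GONE = re.compile(r"^(?P<ts>\w{3} \d{2} [\d:.]+) .*systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

def parse_ts(ts, year=2025):                 # journal lines carry no year, so supply one
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened = {}
    for line in lines:
        if m := NEW.search(line):
            opened[m["id"]] = (m["user"], parse_ts(m["ts"]))
        elif (m := GONE.search(line)) and m["id"] in opened:
            user, start = opened.pop(m["id"])
            yield m["id"], user, (parse_ts(m["ts"]) - start).total_seconds()

if __name__ == "__main__":
    demo = ["Jun 20 18:26:39.083862 systemd-logind[1869]: New session 10 of user core.",
            "Jun 20 18:26:39.559814 systemd-logind[1869]: Removed session 10."]
    for sid, user, secs in session_durations(demo):
        print(f"session {sid} ({user}): open for {secs:.3f}s")
```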
Jun 20 18:26:45.536325 sshd[4859]: Connection closed by 10.200.16.10 port 52380 Jun 20 18:26:45.535673 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:45.538187 systemd-logind[1869]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:26:45.538710 systemd[1]: sshd@8-10.200.20.48:22-10.200.16.10:52380.service: Deactivated successfully. Jun 20 18:26:45.540665 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:26:45.542967 systemd-logind[1869]: Removed session 11. Jun 20 18:26:50.626188 systemd[1]: Started sshd@9-10.200.20.48:22-10.200.16.10:34378.service - OpenSSH per-connection server daemon (10.200.16.10:34378). Jun 20 18:26:51.094523 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 34378 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:51.095613 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:51.099442 systemd-logind[1869]: New session 12 of user core. Jun 20 18:26:51.108124 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:26:51.485029 sshd[4873]: Connection closed by 10.200.16.10 port 34378 Jun 20 18:26:51.485206 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:51.488430 systemd[1]: sshd@9-10.200.20.48:22-10.200.16.10:34378.service: Deactivated successfully. Jun 20 18:26:51.489977 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:26:51.490631 systemd-logind[1869]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:26:51.491757 systemd-logind[1869]: Removed session 12. Jun 20 18:26:56.593353 systemd[1]: Started sshd@10-10.200.20.48:22-10.200.16.10:34394.service - OpenSSH per-connection server daemon (10.200.16.10:34394). Jun 20 18:26:57.060748 sshd[4886]: Accepted publickey for core from 10.200.16.10 port 34394 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:57.061810 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:57.065267 systemd-logind[1869]: New session 13 of user core. Jun 20 18:26:57.074204 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:26:57.450043 sshd[4888]: Connection closed by 10.200.16.10 port 34394 Jun 20 18:26:57.450619 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:57.453633 systemd[1]: sshd@10-10.200.20.48:22-10.200.16.10:34394.service: Deactivated successfully. Jun 20 18:26:57.455300 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:26:57.455943 systemd-logind[1869]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:26:57.457096 systemd-logind[1869]: Removed session 13. Jun 20 18:26:57.534575 systemd[1]: Started sshd@11-10.200.20.48:22-10.200.16.10:34396.service - OpenSSH per-connection server daemon (10.200.16.10:34396). Jun 20 18:26:58.005807 sshd[4900]: Accepted publickey for core from 10.200.16.10 port 34396 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:58.006869 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:58.010592 systemd-logind[1869]: New session 14 of user core. Jun 20 18:26:58.017106 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 20 18:26:58.423302 sshd[4902]: Connection closed by 10.200.16.10 port 34396 Jun 20 18:26:58.423150 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:58.426518 systemd-logind[1869]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:26:58.427135 systemd[1]: sshd@11-10.200.20.48:22-10.200.16.10:34396.service: Deactivated successfully. Jun 20 18:26:58.428576 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:26:58.430448 systemd-logind[1869]: Removed session 14. Jun 20 18:26:58.509384 update_engine[1875]: I20250620 18:26:58.509059 1875 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:26:58.509384 update_engine[1875]: I20250620 18:26:58.509092 1875 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:26:58.509384 update_engine[1875]: I20250620 18:26:58.509227 1875 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:26:58.509822 update_engine[1875]: I20250620 18:26:58.509507 1875 omaha_request_params.cc:62] Current group set to beta Jun 20 18:26:58.509822 update_engine[1875]: I20250620 18:26:58.509578 1875 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:26:58.509822 update_engine[1875]: I20250620 18:26:58.509585 1875 update_attempter.cc:643] Scheduling an action processor start. Jun 20 18:26:58.509822 update_engine[1875]: I20250620 18:26:58.509598 1875 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:26:58.509822 update_engine[1875]: I20250620 18:26:58.509619 1875 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:26:58.510200 locksmithd[1946]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:26:58.510508 update_engine[1875]: I20250620 18:26:58.509661 1875 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:26:58.510508 update_engine[1875]: I20250620 18:26:58.510196 1875 omaha_request_action.cc:272] Request: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: Jun 20 18:26:58.510508 update_engine[1875]: I20250620 18:26:58.510216 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:26:58.511028 update_engine[1875]: I20250620 18:26:58.510985 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:26:58.512404 update_engine[1875]: I20250620 18:26:58.511904 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:26:58.515204 systemd[1]: Started sshd@12-10.200.20.48:22-10.200.16.10:45542.service - OpenSSH per-connection server daemon (10.200.16.10:45542). 
Jun 20 18:26:58.557867 update_engine[1875]: E20250620 18:26:58.557825 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:26:58.557950 update_engine[1875]: I20250620 18:26:58.557897 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:26:58.986795 sshd[4912]: Accepted publickey for core from 10.200.16.10 port 45542 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:58.989287 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:58.993081 systemd-logind[1869]: New session 15 of user core. Jun 20 18:26:59.003118 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:26:59.379089 sshd[4914]: Connection closed by 10.200.16.10 port 45542 Jun 20 18:26:59.379679 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:59.382694 systemd[1]: sshd@12-10.200.20.48:22-10.200.16.10:45542.service: Deactivated successfully. Jun 20 18:26:59.384142 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:26:59.384755 systemd-logind[1869]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:26:59.386266 systemd-logind[1869]: Removed session 15. Jun 20 18:27:04.460843 systemd[1]: Started sshd@13-10.200.20.48:22-10.200.16.10:45544.service - OpenSSH per-connection server daemon (10.200.16.10:45544). Jun 20 18:27:04.912276 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 45544 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:04.913352 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:04.917316 systemd-logind[1869]: New session 16 of user core. Jun 20 18:27:04.925115 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:27:05.280580 sshd[4930]: Connection closed by 10.200.16.10 port 45544 Jun 20 18:27:05.281081 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:05.283937 systemd[1]: sshd@13-10.200.20.48:22-10.200.16.10:45544.service: Deactivated successfully. Jun 20 18:27:05.286353 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:27:05.287133 systemd-logind[1869]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:27:05.288406 systemd-logind[1869]: Removed session 16. Jun 20 18:27:05.377548 systemd[1]: Started sshd@14-10.200.20.48:22-10.200.16.10:45556.service - OpenSSH per-connection server daemon (10.200.16.10:45556). Jun 20 18:27:05.862274 sshd[4942]: Accepted publickey for core from 10.200.16.10 port 45556 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:05.863420 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:05.868088 systemd-logind[1869]: New session 17 of user core. Jun 20 18:27:05.874122 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:27:06.287081 sshd[4944]: Connection closed by 10.200.16.10 port 45556 Jun 20 18:27:06.287604 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:06.290789 systemd[1]: sshd@14-10.200.20.48:22-10.200.16.10:45556.service: Deactivated successfully. Jun 20 18:27:06.292225 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:27:06.292816 systemd-logind[1869]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:27:06.294386 systemd-logind[1869]: Removed session 17. 
Jun 20 18:27:06.368406 systemd[1]: Started sshd@15-10.200.20.48:22-10.200.16.10:45560.service - OpenSSH per-connection server daemon (10.200.16.10:45560). Jun 20 18:27:06.820284 sshd[4953]: Accepted publickey for core from 10.200.16.10 port 45560 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:06.821305 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:06.824878 systemd-logind[1869]: New session 18 of user core. Jun 20 18:27:06.831214 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:27:07.867032 sshd[4955]: Connection closed by 10.200.16.10 port 45560 Jun 20 18:27:07.867587 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:07.871134 systemd-logind[1869]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:27:07.871495 systemd[1]: sshd@15-10.200.20.48:22-10.200.16.10:45560.service: Deactivated successfully. Jun 20 18:27:07.873024 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:27:07.874697 systemd-logind[1869]: Removed session 18. Jun 20 18:27:07.949752 systemd[1]: Started sshd@16-10.200.20.48:22-10.200.16.10:45564.service - OpenSSH per-connection server daemon (10.200.16.10:45564). Jun 20 18:27:08.403346 sshd[4972]: Accepted publickey for core from 10.200.16.10 port 45564 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:08.404359 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:08.408012 systemd-logind[1869]: New session 19 of user core. Jun 20 18:27:08.413284 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:27:08.509183 update_engine[1875]: I20250620 18:27:08.509121 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:27:08.509971 update_engine[1875]: I20250620 18:27:08.509658 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:27:08.509971 update_engine[1875]: I20250620 18:27:08.509929 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:27:08.612507 update_engine[1875]: E20250620 18:27:08.612452 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:27:08.612702 update_engine[1875]: I20250620 18:27:08.612685 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:27:08.852260 sshd[4974]: Connection closed by 10.200.16.10 port 45564 Jun 20 18:27:08.852840 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:08.856454 systemd[1]: sshd@16-10.200.20.48:22-10.200.16.10:45564.service: Deactivated successfully. Jun 20 18:27:08.857941 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:27:08.858565 systemd-logind[1869]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:27:08.859837 systemd-logind[1869]: Removed session 19. Jun 20 18:27:08.943635 systemd[1]: Started sshd@17-10.200.20.48:22-10.200.16.10:45358.service - OpenSSH per-connection server daemon (10.200.16.10:45358). Jun 20 18:27:09.428235 sshd[4983]: Accepted publickey for core from 10.200.16.10 port 45358 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:09.429255 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:09.432725 systemd-logind[1869]: New session 20 of user core. Jun 20 18:27:09.440119 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 20 18:27:09.824862 sshd[4985]: Connection closed by 10.200.16.10 port 45358 Jun 20 18:27:09.993143 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:09.996252 systemd[1]: sshd@17-10.200.20.48:22-10.200.16.10:45358.service: Deactivated successfully. Jun 20 18:27:09.997844 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:27:09.998469 systemd-logind[1869]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:27:09.999609 systemd-logind[1869]: Removed session 20. Jun 20 18:27:14.907533 systemd[1]: Started sshd@18-10.200.20.48:22-10.200.16.10:45374.service - OpenSSH per-connection server daemon (10.200.16.10:45374). Jun 20 18:27:15.355384 sshd[5001]: Accepted publickey for core from 10.200.16.10 port 45374 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:15.356558 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:15.360317 systemd-logind[1869]: New session 21 of user core. Jun 20 18:27:15.369128 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:27:15.717206 sshd[5003]: Connection closed by 10.200.16.10 port 45374 Jun 20 18:27:15.717734 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:15.720540 systemd-logind[1869]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:27:15.721827 systemd[1]: sshd@18-10.200.20.48:22-10.200.16.10:45374.service: Deactivated successfully. Jun 20 18:27:15.723581 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:27:15.725225 systemd-logind[1869]: Removed session 21. Jun 20 18:27:18.511902 update_engine[1875]: I20250620 18:27:18.511824 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:27:18.512319 update_engine[1875]: I20250620 18:27:18.512074 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:27:18.512366 update_engine[1875]: I20250620 18:27:18.512336 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:27:18.560047 update_engine[1875]: E20250620 18:27:18.559990 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:27:18.560171 update_engine[1875]: I20250620 18:27:18.560063 1875 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 18:27:20.811951 systemd[1]: Started sshd@19-10.200.20.48:22-10.200.16.10:56130.service - OpenSSH per-connection server daemon (10.200.16.10:56130). Jun 20 18:27:21.298344 sshd[5014]: Accepted publickey for core from 10.200.16.10 port 56130 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:21.299441 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:21.303200 systemd-logind[1869]: New session 22 of user core. Jun 20 18:27:21.314300 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:27:21.696849 sshd[5016]: Connection closed by 10.200.16.10 port 56130 Jun 20 18:27:21.697460 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:21.700551 systemd[1]: sshd@19-10.200.20.48:22-10.200.16.10:56130.service: Deactivated successfully. Jun 20 18:27:21.701909 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:27:21.702525 systemd-logind[1869]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:27:21.703661 systemd-logind[1869]: Removed session 22. 
Jun 20 18:27:26.779791 systemd[1]: Started sshd@20-10.200.20.48:22-10.200.16.10:56146.service - OpenSSH per-connection server daemon (10.200.16.10:56146). Jun 20 18:27:27.237117 sshd[5028]: Accepted publickey for core from 10.200.16.10 port 56146 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:27.238235 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:27.242169 systemd-logind[1869]: New session 23 of user core. Jun 20 18:27:27.247111 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:27:27.606849 sshd[5030]: Connection closed by 10.200.16.10 port 56146 Jun 20 18:27:27.607584 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:27.610674 systemd[1]: sshd@20-10.200.20.48:22-10.200.16.10:56146.service: Deactivated successfully. Jun 20 18:27:27.612087 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:27:27.612723 systemd-logind[1869]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:27:27.613798 systemd-logind[1869]: Removed session 23. Jun 20 18:27:27.692383 systemd[1]: Started sshd@21-10.200.20.48:22-10.200.16.10:56156.service - OpenSSH per-connection server daemon (10.200.16.10:56156). Jun 20 18:27:28.163701 sshd[5042]: Accepted publickey for core from 10.200.16.10 port 56156 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:28.164775 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:28.168445 systemd-logind[1869]: New session 24 of user core. Jun 20 18:27:28.173182 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:27:28.509484 update_engine[1875]: I20250620 18:27:28.509025 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:27:28.509484 update_engine[1875]: I20250620 18:27:28.509225 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:27:28.509484 update_engine[1875]: I20250620 18:27:28.509441 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:27:28.611789 update_engine[1875]: E20250620 18:27:28.611740 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.611964 1875 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.611979 1875 omaha_request_action.cc:617] Omaha request response: Jun 20 18:27:28.612965 update_engine[1875]: E20250620 18:27:28.612099 1875 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612118 1875 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612121 1875 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612124 1875 update_attempter.cc:306] Processing Done. Jun 20 18:27:28.612965 update_engine[1875]: E20250620 18:27:28.612136 1875 update_attempter.cc:619] Update failed. 
Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612141 1875 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612144 1875 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612147 1875 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612244 1875 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612267 1875 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:27:28.612965 update_engine[1875]: I20250620 18:27:28.612270 1875 omaha_request_action.cc:272] Request: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.612965 update_engine[1875]: Jun 20 18:27:28.613577 update_engine[1875]: I20250620 18:27:28.612274 1875 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:27:28.613577 update_engine[1875]: I20250620 18:27:28.612403 1875 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:27:28.613577 update_engine[1875]: I20250620 18:27:28.612602 1875 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:27:28.613620 locksmithd[1946]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 18:27:28.628815 update_engine[1875]: E20250620 18:27:28.628679 1875 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628723 1875 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628729 1875 omaha_request_action.cc:617] Omaha request response: Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628734 1875 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628737 1875 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628741 1875 update_attempter.cc:306] Processing Done. Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628743 1875 update_attempter.cc:310] Error event sent. 
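The update_engine burst in these last minutes of the journal is an Omaha update check aimed at a server literally named "disabled": libcurl cannot resolve that host, retries on a one-second timer (retries 1 through 3 above), and the attempter then maps the failure to error code 2000 / kActionCodeOmahaErrorInHTTPResponse and posts an error event to the same unreachable endpoint. Pointing the Omaha SERVER at the literal value "disabled" is a common way to switch off Flatcar update checks, which would make these resolver errors expected noise rather than a fault. A small illustrative sketch for tallying such an attempt from journal text follows; the matched phrases are exactly the ones appearing above, everything else is an assumption.

```python
import re
from collections import Counter

PATTERNS = {
    "fetch_started":  re.compile(r"libcurl_http_fetcher\.cc:\d+\] Starting/Resuming transfer"),
    "resolve_failed": re.compile(r"Unable to get http response code: Could not resolve host: (\S+)"),
    "retry":          re.compile(r"No HTTP response, retry (\d+)"),
    "omaha_failed":   re.compile(r"Omaha request network transfer failed\."),
    "update_failed":  re.compile(r"update_attempter\.cc:\d+\] Update failed\."),
}

def summarise_update_attempt(journal_text):
    """Count update_engine milestones in a journal excerpt and collect unresolvable hosts."""
    counts, hosts = Counter(), set()
    for name, pat in PATTERNS.items():
        for m in pat.finditer(journal_text):
            counts[name] += 1
            if name == "resolve_failed":
                hosts.add(m.group(1))
    return counts, hosts

if __name__ == "__main__":
    excerpt = ("update_engine[1875]: I... libcurl_http_fetcher.cc:47] Starting/Resuming transfer\n"
               "update_engine[1875]: E... libcurl_http_fetcher.cc:266] Unable to get http response code: "
               "Could not resolve host: disabled\n"
               "update_engine[1875]: I... libcurl_http_fetcher.cc:283] No HTTP response, retry 1\n")
    counts, hosts = summarise_update_attempt(excerpt)
    print(dict(counts), "unresolvable hosts:", hosts)
```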
Jun 20 18:27:28.628815 update_engine[1875]: I20250620 18:27:28.628751 1875 update_check_scheduler.cc:74] Next update check in 45m14s Jun 20 18:27:28.630024 locksmithd[1946]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 18:27:29.722657 containerd[1886]: time="2025-06-20T18:27:29.722610362Z" level=info msg="StopContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" with timeout 30 (s)" Jun 20 18:27:29.724124 containerd[1886]: time="2025-06-20T18:27:29.724082592Z" level=info msg="Stop container \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" with signal terminated" Jun 20 18:27:29.735751 containerd[1886]: time="2025-06-20T18:27:29.735724097Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:27:29.738702 systemd[1]: cri-containerd-434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0.scope: Deactivated successfully. Jun 20 18:27:29.741109 containerd[1886]: time="2025-06-20T18:27:29.740999271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" id:\"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" pid:3915 exited_at:{seconds:1750444049 nanos:740630574}" Jun 20 18:27:29.741192 containerd[1886]: time="2025-06-20T18:27:29.741138283Z" level=info msg="received exit event container_id:\"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" id:\"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" pid:3915 exited_at:{seconds:1750444049 nanos:740630574}" Jun 20 18:27:29.742747 containerd[1886]: time="2025-06-20T18:27:29.742718819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"25884b4981d6918672cd932794f0ee6c3a7f80d434e2538dd741b41f16a96964\" pid:5062 exited_at:{seconds:1750444049 nanos:742508030}" Jun 20 18:27:29.745045 containerd[1886]: time="2025-06-20T18:27:29.744834017Z" level=info msg="StopContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" with timeout 2 (s)" Jun 20 18:27:29.745122 containerd[1886]: time="2025-06-20T18:27:29.745103696Z" level=info msg="Stop container \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" with signal terminated" Jun 20 18:27:29.751870 systemd-networkd[1482]: lxc_health: Link DOWN Jun 20 18:27:29.752053 systemd-networkd[1482]: lxc_health: Lost carrier Jun 20 18:27:29.764493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0-rootfs.mount: Deactivated successfully. Jun 20 18:27:29.766469 containerd[1886]: time="2025-06-20T18:27:29.766447769Z" level=info msg="received exit event container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" pid:3982 exited_at:{seconds:1750444049 nanos:766210474}" Jun 20 18:27:29.766592 systemd[1]: cri-containerd-14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9.scope: Deactivated successfully. 
Jun 20 18:27:29.766750 containerd[1886]: time="2025-06-20T18:27:29.766490354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" id:\"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" pid:3982 exited_at:{seconds:1750444049 nanos:766210474}" Jun 20 18:27:29.768679 systemd[1]: cri-containerd-14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9.scope: Consumed 4.550s CPU time, 138.4M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:27:29.780704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9-rootfs.mount: Deactivated successfully. Jun 20 18:27:29.823073 containerd[1886]: time="2025-06-20T18:27:29.823024852Z" level=info msg="StopContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" returns successfully" Jun 20 18:27:29.824102 containerd[1886]: time="2025-06-20T18:27:29.823994709Z" level=info msg="StopPodSandbox for \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\"" Jun 20 18:27:29.824102 containerd[1886]: time="2025-06-20T18:27:29.824068791Z" level=info msg="Container to stop \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.824102 containerd[1886]: time="2025-06-20T18:27:29.824077343Z" level=info msg="Container to stop \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.824102 containerd[1886]: time="2025-06-20T18:27:29.824082823Z" level=info msg="Container to stop \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.824102 containerd[1886]: time="2025-06-20T18:27:29.824088543Z" level=info msg="Container to stop \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.824318 containerd[1886]: time="2025-06-20T18:27:29.824093799Z" level=info msg="Container to stop \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.825303 containerd[1886]: time="2025-06-20T18:27:29.825220540Z" level=info msg="StopContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" returns successfully" Jun 20 18:27:29.825623 containerd[1886]: time="2025-06-20T18:27:29.825548316Z" level=info msg="StopPodSandbox for \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\"" Jun 20 18:27:29.825623 containerd[1886]: time="2025-06-20T18:27:29.825586997Z" level=info msg="Container to stop \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:27:29.829389 systemd[1]: cri-containerd-52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d.scope: Deactivated successfully. Jun 20 18:27:29.831573 systemd[1]: cri-containerd-b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41.scope: Deactivated successfully. 
Jun 20 18:27:29.832991 containerd[1886]: time="2025-06-20T18:27:29.832966482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" id:\"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" pid:3538 exit_status:137 exited_at:{seconds:1750444049 nanos:832727876}" Jun 20 18:27:29.848288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d-rootfs.mount: Deactivated successfully. Jun 20 18:27:29.857752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41-rootfs.mount: Deactivated successfully. Jun 20 18:27:29.865800 containerd[1886]: time="2025-06-20T18:27:29.865744830Z" level=info msg="shim disconnected" id=52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d namespace=k8s.io Jun 20 18:27:29.865879 containerd[1886]: time="2025-06-20T18:27:29.865765575Z" level=warning msg="cleaning up after shim disconnected" id=52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d namespace=k8s.io Jun 20 18:27:29.865879 containerd[1886]: time="2025-06-20T18:27:29.865855705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:27:29.866556 containerd[1886]: time="2025-06-20T18:27:29.866444584Z" level=info msg="shim disconnected" id=b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41 namespace=k8s.io Jun 20 18:27:29.866556 containerd[1886]: time="2025-06-20T18:27:29.866461584Z" level=warning msg="cleaning up after shim disconnected" id=b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41 namespace=k8s.io Jun 20 18:27:29.866556 containerd[1886]: time="2025-06-20T18:27:29.866477417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:27:29.874702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d-shm.mount: Deactivated successfully. 
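The sandbox TaskExit events in this teardown report exit_status:137, the shell-style 128+signal encoding for a process killed by SIGKILL (signal 9), which is consistent with the pause processes being killed when their sandboxes are stopped. A tiny decoder, for illustration only:

```python
import signal

def describe_exit(status: int) -> str:
    """Decode the exit_status values containerd reports in TaskExit events (128+N means killed by signal N)."""
    if status > 128:
        sig = status - 128
        return f"killed by signal {sig} ({signal.Signals(sig).name})"
    return f"exited with code {status}"

print(describe_exit(137))  # -> killed by signal 9 (SIGKILL)
print(describe_exit(1))    # -> exited with code 1 (cf. the exit_status:1 TaskExit events earlier)
```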
Jun 20 18:27:29.875204 containerd[1886]: time="2025-06-20T18:27:29.875056180Z" level=info msg="received exit event sandbox_id:\"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" exit_status:137 exited_at:{seconds:1750444049 nanos:834715590}" Jun 20 18:27:29.875394 containerd[1886]: time="2025-06-20T18:27:29.875372412Z" level=info msg="TearDown network for sandbox \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" successfully" Jun 20 18:27:29.875394 containerd[1886]: time="2025-06-20T18:27:29.875391092Z" level=info msg="StopPodSandbox for \"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" returns successfully" Jun 20 18:27:29.876440 containerd[1886]: time="2025-06-20T18:27:29.876317324Z" level=info msg="received exit event sandbox_id:\"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" exit_status:137 exited_at:{seconds:1750444049 nanos:832727876}" Jun 20 18:27:29.876717 containerd[1886]: time="2025-06-20T18:27:29.876635852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" id:\"52c91ba6ee2f455637a70c9091231138cd6fa8412d7c78e9693539a4244b124d\" pid:3501 exit_status:137 exited_at:{seconds:1750444049 nanos:834715590}" Jun 20 18:27:29.877186 containerd[1886]: time="2025-06-20T18:27:29.877076495Z" level=info msg="TearDown network for sandbox \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" successfully" Jun 20 18:27:29.877186 containerd[1886]: time="2025-06-20T18:27:29.877096160Z" level=info msg="StopPodSandbox for \"b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41\" returns successfully" Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987901 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651ded85-ffca-47b5-8312-40de593c296f-clustermesh-secrets\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987940 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-run\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987953 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-kernel\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987965 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-etc-cni-netd\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987975 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-net\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988465 kubelet[3378]: I0620 18:27:29.987989 3378 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbbp9\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-kube-api-access-pbbp9\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988018 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mxs\" (UniqueName: \"kubernetes.io/projected/6242d19c-c967-435a-b3ba-42f41b4c0e7c-kube-api-access-z5mxs\") pod \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\" (UID: \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988031 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-hubble-tls\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988042 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-lib-modules\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988052 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-bpf-maps\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988065 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cni-path\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988826 kubelet[3378]: I0620 18:27:29.988093 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6242d19c-c967-435a-b3ba-42f41b4c0e7c-cilium-config-path\") pod \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\" (UID: \"6242d19c-c967-435a-b3ba-42f41b4c0e7c\") " Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988103 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-xtables-lock\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988112 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-hostproc\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988123 3378 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651ded85-ffca-47b5-8312-40de593c296f-cilium-config-path\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988131 3378 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-cgroup\") pod \"651ded85-ffca-47b5-8312-40de593c296f\" (UID: \"651ded85-ffca-47b5-8312-40de593c296f\") " Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988172 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.988918 kubelet[3378]: I0620 18:27:29.988208 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.989028 kubelet[3378]: I0620 18:27:29.988235 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.989028 kubelet[3378]: I0620 18:27:29.988245 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.989028 kubelet[3378]: I0620 18:27:29.988253 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.990962 kubelet[3378]: I0620 18:27:29.990115 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-kube-api-access-pbbp9" (OuterVolumeSpecName: "kube-api-access-pbbp9") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "kube-api-access-pbbp9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:27:29.991468 kubelet[3378]: I0620 18:27:29.991449 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651ded85-ffca-47b5-8312-40de593c296f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:27:29.992423 kubelet[3378]: I0620 18:27:29.992405 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.992543 kubelet[3378]: I0620 18:27:29.992509 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.992600 kubelet[3378]: I0620 18:27:29.992512 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-hostproc" (OuterVolumeSpecName: "hostproc") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.992641 kubelet[3378]: I0620 18:27:29.992520 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.992672 kubelet[3378]: I0620 18:27:29.992527 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cni-path" (OuterVolumeSpecName: "cni-path") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:27:29.992771 kubelet[3378]: I0620 18:27:29.992756 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6242d19c-c967-435a-b3ba-42f41b4c0e7c-kube-api-access-z5mxs" (OuterVolumeSpecName: "kube-api-access-z5mxs") pod "6242d19c-c967-435a-b3ba-42f41b4c0e7c" (UID: "6242d19c-c967-435a-b3ba-42f41b4c0e7c"). InnerVolumeSpecName "kube-api-access-z5mxs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:27:29.993326 kubelet[3378]: I0620 18:27:29.993310 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6242d19c-c967-435a-b3ba-42f41b4c0e7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6242d19c-c967-435a-b3ba-42f41b4c0e7c" (UID: "6242d19c-c967-435a-b3ba-42f41b4c0e7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:27:29.994186 kubelet[3378]: I0620 18:27:29.994162 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:27:29.994327 kubelet[3378]: I0620 18:27:29.994301 3378 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651ded85-ffca-47b5-8312-40de593c296f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "651ded85-ffca-47b5-8312-40de593c296f" (UID: "651ded85-ffca-47b5-8312-40de593c296f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:27:30.089265 kubelet[3378]: I0620 18:27:30.089241 3378 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cni-path\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089364 kubelet[3378]: I0620 18:27:30.089353 3378 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6242d19c-c967-435a-b3ba-42f41b4c0e7c-cilium-config-path\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089419 kubelet[3378]: I0620 18:27:30.089410 3378 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-bpf-maps\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089457 kubelet[3378]: I0620 18:27:30.089451 3378 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-hostproc\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089495 kubelet[3378]: I0620 18:27:30.089487 3378 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-xtables-lock\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089537 kubelet[3378]: I0620 18:27:30.089530 3378 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651ded85-ffca-47b5-8312-40de593c296f-cilium-config-path\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089563 3378 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-cgroup\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089574 3378 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/651ded85-ffca-47b5-8312-40de593c296f-clustermesh-secrets\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089583 3378 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-kernel\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089591 3378 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-cilium-run\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089600 3378 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-etc-cni-netd\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089606 3378 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-host-proc-sys-net\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089613 3378 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pbbp9\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-kube-api-access-pbbp9\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089642 kubelet[3378]: I0620 18:27:30.089619 3378 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5mxs\" (UniqueName: \"kubernetes.io/projected/6242d19c-c967-435a-b3ba-42f41b4c0e7c-kube-api-access-z5mxs\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089774 kubelet[3378]: I0620 18:27:30.089624 3378 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/651ded85-ffca-47b5-8312-40de593c296f-lib-modules\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.089774 kubelet[3378]: I0620 18:27:30.089630 3378 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/651ded85-ffca-47b5-8312-40de593c296f-hubble-tls\") on node \"ci-4344.1.0-a-a1e4bb5c79\" DevicePath \"\"" Jun 20 18:27:30.179748 systemd[1]: Removed slice kubepods-burstable-pod651ded85_ffca_47b5_8312_40de593c296f.slice - libcontainer container kubepods-burstable-pod651ded85_ffca_47b5_8312_40de593c296f.slice. Jun 20 18:27:30.179832 systemd[1]: kubepods-burstable-pod651ded85_ffca_47b5_8312_40de593c296f.slice: Consumed 4.609s CPU time, 138.9M memory peak, 136K read from disk, 12.9M written to disk. Jun 20 18:27:30.181803 systemd[1]: Removed slice kubepods-besteffort-pod6242d19c_c967_435a_b3ba_42f41b4c0e7c.slice - libcontainer container kubepods-besteffort-pod6242d19c_c967_435a_b3ba_42f41b4c0e7c.slice. 
Jun 20 18:27:30.264472 kubelet[3378]: E0620 18:27:30.264389 3378 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:27:30.568621 kubelet[3378]: I0620 18:27:30.567862 3378 scope.go:117] "RemoveContainer" containerID="434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0" Jun 20 18:27:30.570220 containerd[1886]: time="2025-06-20T18:27:30.570119153Z" level=info msg="RemoveContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\"" Jun 20 18:27:30.582879 containerd[1886]: time="2025-06-20T18:27:30.582819205Z" level=info msg="RemoveContainer for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" returns successfully" Jun 20 18:27:30.583077 kubelet[3378]: I0620 18:27:30.583058 3378 scope.go:117] "RemoveContainer" containerID="434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0" Jun 20 18:27:30.583315 containerd[1886]: time="2025-06-20T18:27:30.583217359Z" level=error msg="ContainerStatus for \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\": not found" Jun 20 18:27:30.583560 kubelet[3378]: E0620 18:27:30.583425 3378 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\": not found" containerID="434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0" Jun 20 18:27:30.583560 kubelet[3378]: I0620 18:27:30.583450 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0"} err="failed to get container status \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"434ec4da0839412b5632fb38689de2f8905b27428bcd755396d7078d245e98d0\": not found" Jun 20 18:27:30.583560 kubelet[3378]: I0620 18:27:30.583499 3378 scope.go:117] "RemoveContainer" containerID="14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9" Jun 20 18:27:30.585751 containerd[1886]: time="2025-06-20T18:27:30.585730487Z" level=info msg="RemoveContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\"" Jun 20 18:27:30.595908 containerd[1886]: time="2025-06-20T18:27:30.595885418Z" level=info msg="RemoveContainer for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" returns successfully" Jun 20 18:27:30.596150 kubelet[3378]: I0620 18:27:30.596131 3378 scope.go:117] "RemoveContainer" containerID="2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05" Jun 20 18:27:30.597385 containerd[1886]: time="2025-06-20T18:27:30.597323191Z" level=info msg="RemoveContainer for \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\"" Jun 20 18:27:30.609101 containerd[1886]: time="2025-06-20T18:27:30.609027738Z" level=info msg="RemoveContainer for \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" returns successfully" Jun 20 18:27:30.609536 kubelet[3378]: I0620 18:27:30.609147 3378 scope.go:117] "RemoveContainer" containerID="3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3" Jun 20 18:27:30.610939 
containerd[1886]: time="2025-06-20T18:27:30.610845392Z" level=info msg="RemoveContainer for \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\"" Jun 20 18:27:30.621975 containerd[1886]: time="2025-06-20T18:27:30.621904274Z" level=info msg="RemoveContainer for \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" returns successfully" Jun 20 18:27:30.622183 kubelet[3378]: I0620 18:27:30.622077 3378 scope.go:117] "RemoveContainer" containerID="b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893" Jun 20 18:27:30.623473 containerd[1886]: time="2025-06-20T18:27:30.623447034Z" level=info msg="RemoveContainer for \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\"" Jun 20 18:27:30.631317 containerd[1886]: time="2025-06-20T18:27:30.631294842Z" level=info msg="RemoveContainer for \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" returns successfully" Jun 20 18:27:30.631488 kubelet[3378]: I0620 18:27:30.631426 3378 scope.go:117] "RemoveContainer" containerID="6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a" Jun 20 18:27:30.632582 containerd[1886]: time="2025-06-20T18:27:30.632564466Z" level=info msg="RemoveContainer for \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\"" Jun 20 18:27:30.642816 containerd[1886]: time="2025-06-20T18:27:30.642749358Z" level=info msg="RemoveContainer for \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" returns successfully" Jun 20 18:27:30.642968 kubelet[3378]: I0620 18:27:30.642944 3378 scope.go:117] "RemoveContainer" containerID="14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9" Jun 20 18:27:30.643366 containerd[1886]: time="2025-06-20T18:27:30.643335397Z" level=error msg="ContainerStatus for \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\": not found" Jun 20 18:27:30.643489 kubelet[3378]: E0620 18:27:30.643465 3378 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\": not found" containerID="14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9" Jun 20 18:27:30.643523 kubelet[3378]: I0620 18:27:30.643505 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9"} err="failed to get container status \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\": rpc error: code = NotFound desc = an error occurred when try to find container \"14672f3162e6821eb108b199dd4a528351cf10ada15725434bd049c19d430ba9\": not found" Jun 20 18:27:30.643540 kubelet[3378]: I0620 18:27:30.643522 3378 scope.go:117] "RemoveContainer" containerID="2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05" Jun 20 18:27:30.643766 containerd[1886]: time="2025-06-20T18:27:30.643730039Z" level=error msg="ContainerStatus for \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\": not found" Jun 20 18:27:30.643849 kubelet[3378]: E0620 18:27:30.643831 3378 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\": not found" containerID="2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05" Jun 20 18:27:30.643870 kubelet[3378]: I0620 18:27:30.643852 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05"} err="failed to get container status \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b31f8ecbae748ea3e53f93210dd37aeecd336dd586d489ffab79153c8c78d05\": not found" Jun 20 18:27:30.643870 kubelet[3378]: I0620 18:27:30.643864 3378 scope.go:117] "RemoveContainer" containerID="3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3" Jun 20 18:27:30.644011 containerd[1886]: time="2025-06-20T18:27:30.643982630Z" level=error msg="ContainerStatus for \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\": not found" Jun 20 18:27:30.644174 kubelet[3378]: E0620 18:27:30.644157 3378 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\": not found" containerID="3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3" Jun 20 18:27:30.644206 kubelet[3378]: I0620 18:27:30.644176 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3"} err="failed to get container status \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a8d0c021d781ac17ea501f8fb38109f826dba3d181960474120fe766456a3e3\": not found" Jun 20 18:27:30.644206 kubelet[3378]: I0620 18:27:30.644189 3378 scope.go:117] "RemoveContainer" containerID="b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893" Jun 20 18:27:30.644412 containerd[1886]: time="2025-06-20T18:27:30.644389224Z" level=error msg="ContainerStatus for \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\": not found" Jun 20 18:27:30.644531 kubelet[3378]: E0620 18:27:30.644512 3378 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\": not found" containerID="b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893" Jun 20 18:27:30.644664 kubelet[3378]: I0620 18:27:30.644589 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893"} err="failed to get container status \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\": rpc error: code = NotFound desc = an error occurred when try to find container \"b05ca6b5f1df1985f8cafb1d13464a74738dfd12796a7baa900ced188d28d893\": not found" 
Jun 20 18:27:30.644664 kubelet[3378]: I0620 18:27:30.644606 3378 scope.go:117] "RemoveContainer" containerID="6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a" Jun 20 18:27:30.644817 containerd[1886]: time="2025-06-20T18:27:30.644745657Z" level=error msg="ContainerStatus for \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\": not found" Jun 20 18:27:30.644916 kubelet[3378]: E0620 18:27:30.644897 3378 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\": not found" containerID="6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a" Jun 20 18:27:30.644945 kubelet[3378]: I0620 18:27:30.644928 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a"} err="failed to get container status \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6509048a759320f95cf55801a1edf7cad34675be64a7142adffe635e0a37a42a\": not found" Jun 20 18:27:30.764691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b59a839f1eaff7cb15139bb505302b30b06d179971ff316473c11e338f5fea41-shm.mount: Deactivated successfully. Jun 20 18:27:30.765138 systemd[1]: var-lib-kubelet-pods-651ded85\x2dffca\x2d47b5\x2d8312\x2d40de593c296f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbbp9.mount: Deactivated successfully. Jun 20 18:27:30.765183 systemd[1]: var-lib-kubelet-pods-6242d19c\x2dc967\x2d435a\x2db3ba\x2d42f41b4c0e7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5mxs.mount: Deactivated successfully. Jun 20 18:27:30.765220 systemd[1]: var-lib-kubelet-pods-651ded85\x2dffca\x2d47b5\x2d8312\x2d40de593c296f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:27:30.765255 systemd[1]: var-lib-kubelet-pods-651ded85\x2dffca\x2d47b5\x2d8312\x2d40de593c296f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:27:31.746050 sshd[5044]: Connection closed by 10.200.16.10 port 56156 Jun 20 18:27:31.746626 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:31.749685 systemd-logind[1869]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:27:31.749917 systemd[1]: sshd@21-10.200.20.48:22-10.200.16.10:56156.service: Deactivated successfully. Jun 20 18:27:31.751648 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:27:31.752906 systemd-logind[1869]: Removed session 24. Jun 20 18:27:31.834351 systemd[1]: Started sshd@22-10.200.20.48:22-10.200.16.10:48822.service - OpenSSH per-connection server daemon (10.200.16.10:48822). 
Jun 20 18:27:32.173435 kubelet[3378]: I0620 18:27:32.173341 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6242d19c-c967-435a-b3ba-42f41b4c0e7c" path="/var/lib/kubelet/pods/6242d19c-c967-435a-b3ba-42f41b4c0e7c/volumes" Jun 20 18:27:32.174044 kubelet[3378]: I0620 18:27:32.173999 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651ded85-ffca-47b5-8312-40de593c296f" path="/var/lib/kubelet/pods/651ded85-ffca-47b5-8312-40de593c296f/volumes" Jun 20 18:27:32.291729 sshd[5195]: Accepted publickey for core from 10.200.16.10 port 48822 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:32.292748 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:32.297154 systemd-logind[1869]: New session 25 of user core. Jun 20 18:27:32.301116 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:27:33.173047 kubelet[3378]: I0620 18:27:33.172997 3378 memory_manager.go:355] "RemoveStaleState removing state" podUID="651ded85-ffca-47b5-8312-40de593c296f" containerName="cilium-agent" Jun 20 18:27:33.173047 kubelet[3378]: I0620 18:27:33.173036 3378 memory_manager.go:355] "RemoveStaleState removing state" podUID="6242d19c-c967-435a-b3ba-42f41b4c0e7c" containerName="cilium-operator" Jun 20 18:27:33.181250 systemd[1]: Created slice kubepods-burstable-pod1f02617b_1b77_4a76_82e5_c9e017a1b569.slice - libcontainer container kubepods-burstable-pod1f02617b_1b77_4a76_82e5_c9e017a1b569.slice. Jun 20 18:27:33.237780 sshd[5197]: Connection closed by 10.200.16.10 port 48822 Jun 20 18:27:33.238367 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:33.241226 systemd[1]: sshd@22-10.200.20.48:22-10.200.16.10:48822.service: Deactivated successfully. Jun 20 18:27:33.243442 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:27:33.244564 systemd-logind[1869]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:27:33.246229 systemd-logind[1869]: Removed session 25. 
Jun 20 18:27:33.306921 kubelet[3378]: I0620 18:27:33.306892 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-cilium-run\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307302 kubelet[3378]: I0620 18:27:33.307230 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-bpf-maps\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307302 kubelet[3378]: I0620 18:27:33.307261 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-host-proc-sys-net\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307302 kubelet[3378]: I0620 18:27:33.307274 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-cni-path\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307302 kubelet[3378]: I0620 18:27:33.307285 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-etc-cni-netd\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307314 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-hostproc\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307348 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-cilium-cgroup\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307373 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f02617b-1b77-4a76-82e5-c9e017a1b569-clustermesh-secrets\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307387 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-lib-modules\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307402 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-xtables-lock\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307541 kubelet[3378]: I0620 18:27:33.307412 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f02617b-1b77-4a76-82e5-c9e017a1b569-host-proc-sys-kernel\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307654 kubelet[3378]: I0620 18:27:33.307446 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f02617b-1b77-4a76-82e5-c9e017a1b569-cilium-config-path\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307654 kubelet[3378]: I0620 18:27:33.307469 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6dc\" (UniqueName: \"kubernetes.io/projected/1f02617b-1b77-4a76-82e5-c9e017a1b569-kube-api-access-ql6dc\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307654 kubelet[3378]: I0620 18:27:33.307496 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f02617b-1b77-4a76-82e5-c9e017a1b569-cilium-ipsec-secrets\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.307654 kubelet[3378]: I0620 18:27:33.307508 3378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f02617b-1b77-4a76-82e5-c9e017a1b569-hubble-tls\") pod \"cilium-2mb8k\" (UID: \"1f02617b-1b77-4a76-82e5-c9e017a1b569\") " pod="kube-system/cilium-2mb8k" Jun 20 18:27:33.320843 systemd[1]: Started sshd@23-10.200.20.48:22-10.200.16.10:48838.service - OpenSSH per-connection server daemon (10.200.16.10:48838). Jun 20 18:27:33.484424 containerd[1886]: time="2025-06-20T18:27:33.484379775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mb8k,Uid:1f02617b-1b77-4a76-82e5-c9e017a1b569,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:33.515799 containerd[1886]: time="2025-06-20T18:27:33.515757212Z" level=info msg="connecting to shim 7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:33.537131 systemd[1]: Started cri-containerd-7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1.scope - libcontainer container 7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1. 
Jun 20 18:27:33.557924 containerd[1886]: time="2025-06-20T18:27:33.557887974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mb8k,Uid:1f02617b-1b77-4a76-82e5-c9e017a1b569,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\"" Jun 20 18:27:33.561612 containerd[1886]: time="2025-06-20T18:27:33.561285026Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:27:33.581020 containerd[1886]: time="2025-06-20T18:27:33.580976094Z" level=info msg="Container abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:33.594835 containerd[1886]: time="2025-06-20T18:27:33.594804453Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\"" Jun 20 18:27:33.595236 containerd[1886]: time="2025-06-20T18:27:33.595214153Z" level=info msg="StartContainer for \"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\"" Jun 20 18:27:33.596401 containerd[1886]: time="2025-06-20T18:27:33.596374892Z" level=info msg="connecting to shim abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" protocol=ttrpc version=3 Jun 20 18:27:33.614116 systemd[1]: Started cri-containerd-abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0.scope - libcontainer container abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0. Jun 20 18:27:33.637578 containerd[1886]: time="2025-06-20T18:27:33.637518008Z" level=info msg="StartContainer for \"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\" returns successfully" Jun 20 18:27:33.639825 systemd[1]: cri-containerd-abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0.scope: Deactivated successfully. Jun 20 18:27:33.641483 containerd[1886]: time="2025-06-20T18:27:33.641437852Z" level=info msg="received exit event container_id:\"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\" id:\"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\" pid:5272 exited_at:{seconds:1750444053 nanos:641239934}" Jun 20 18:27:33.642199 containerd[1886]: time="2025-06-20T18:27:33.642095551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\" id:\"abf747645555a83951c6cb0c255307fb336b555ec6f3f4c492617b2f4dbaf6b0\" pid:5272 exited_at:{seconds:1750444053 nanos:641239934}" Jun 20 18:27:33.772411 sshd[5207]: Accepted publickey for core from 10.200.16.10 port 48838 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:33.773428 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:33.777232 systemd-logind[1869]: New session 26 of user core. Jun 20 18:27:33.781119 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 20 18:27:34.096250 sshd[5307]: Connection closed by 10.200.16.10 port 48838 Jun 20 18:27:34.095537 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:34.098877 systemd[1]: sshd@23-10.200.20.48:22-10.200.16.10:48838.service: Deactivated successfully. Jun 20 18:27:34.100334 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:27:34.100953 systemd-logind[1869]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:27:34.102179 systemd-logind[1869]: Removed session 26. Jun 20 18:27:34.170890 kubelet[3378]: I0620 18:27:34.170841 3378 setters.go:602] "Node became not ready" node="ci-4344.1.0-a-a1e4bb5c79" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:27:34Z","lastTransitionTime":"2025-06-20T18:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:27:34.178391 systemd[1]: Started sshd@24-10.200.20.48:22-10.200.16.10:48840.service - OpenSSH per-connection server daemon (10.200.16.10:48840). Jun 20 18:27:34.589268 containerd[1886]: time="2025-06-20T18:27:34.589227908Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:27:34.612593 containerd[1886]: time="2025-06-20T18:27:34.612557842Z" level=info msg="Container b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:34.616361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185833639.mount: Deactivated successfully. Jun 20 18:27:34.626263 containerd[1886]: time="2025-06-20T18:27:34.626230908Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\"" Jun 20 18:27:34.627251 containerd[1886]: time="2025-06-20T18:27:34.627225724Z" level=info msg="StartContainer for \"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\"" Jun 20 18:27:34.628511 containerd[1886]: time="2025-06-20T18:27:34.628488580Z" level=info msg="connecting to shim b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" protocol=ttrpc version=3 Jun 20 18:27:34.638619 sshd[5314]: Accepted publickey for core from 10.200.16.10 port 48840 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:27:34.640805 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:27:34.646143 systemd[1]: Started cri-containerd-b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78.scope - libcontainer container b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78. Jun 20 18:27:34.654633 systemd-logind[1869]: New session 27 of user core. Jun 20 18:27:34.659123 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 20 18:27:34.678755 containerd[1886]: time="2025-06-20T18:27:34.678728806Z" level=info msg="StartContainer for \"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\" returns successfully" Jun 20 18:27:34.680729 systemd[1]: cri-containerd-b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78.scope: Deactivated successfully. Jun 20 18:27:34.681873 containerd[1886]: time="2025-06-20T18:27:34.681845537Z" level=info msg="received exit event container_id:\"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\" id:\"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\" pid:5330 exited_at:{seconds:1750444054 nanos:681671204}" Jun 20 18:27:34.682184 containerd[1886]: time="2025-06-20T18:27:34.682154731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\" id:\"b1d7a34122b1da3e30a950887b7ff968d6fecd5339537fa441e67631dc482d78\" pid:5330 exited_at:{seconds:1750444054 nanos:681671204}" Jun 20 18:27:35.265592 kubelet[3378]: E0620 18:27:35.265546 3378 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:27:35.598720 containerd[1886]: time="2025-06-20T18:27:35.597286803Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:27:35.620397 containerd[1886]: time="2025-06-20T18:27:35.620371471Z" level=info msg="Container fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:35.640837 containerd[1886]: time="2025-06-20T18:27:35.640806864Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\"" Jun 20 18:27:35.641485 containerd[1886]: time="2025-06-20T18:27:35.641461845Z" level=info msg="StartContainer for \"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\"" Jun 20 18:27:35.643223 containerd[1886]: time="2025-06-20T18:27:35.643199468Z" level=info msg="connecting to shim fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" protocol=ttrpc version=3 Jun 20 18:27:35.661135 systemd[1]: Started cri-containerd-fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4.scope - libcontainer container fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4. Jun 20 18:27:35.684230 systemd[1]: cri-containerd-fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4.scope: Deactivated successfully. 
Jun 20 18:27:35.685334 containerd[1886]: time="2025-06-20T18:27:35.685307572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\" id:\"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\" pid:5380 exited_at:{seconds:1750444055 nanos:684984562}" Jun 20 18:27:35.686201 containerd[1886]: time="2025-06-20T18:27:35.685979794Z" level=info msg="received exit event container_id:\"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\" id:\"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\" pid:5380 exited_at:{seconds:1750444055 nanos:684984562}" Jun 20 18:27:35.687066 containerd[1886]: time="2025-06-20T18:27:35.686993314Z" level=info msg="StartContainer for \"fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4\" returns successfully" Jun 20 18:27:35.703879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd74116ca030019fd4ec409fe01469be6845f133b57fa1db83e699dd06b376a4-rootfs.mount: Deactivated successfully. Jun 20 18:27:36.598773 containerd[1886]: time="2025-06-20T18:27:36.598697524Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:27:36.620303 containerd[1886]: time="2025-06-20T18:27:36.619885708Z" level=info msg="Container 370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:36.637617 containerd[1886]: time="2025-06-20T18:27:36.637579262Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\"" Jun 20 18:27:36.638842 containerd[1886]: time="2025-06-20T18:27:36.638785436Z" level=info msg="StartContainer for \"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\"" Jun 20 18:27:36.639840 containerd[1886]: time="2025-06-20T18:27:36.639817989Z" level=info msg="connecting to shim 370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" protocol=ttrpc version=3 Jun 20 18:27:36.659123 systemd[1]: Started cri-containerd-370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2.scope - libcontainer container 370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2. Jun 20 18:27:36.678097 systemd[1]: cri-containerd-370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2.scope: Deactivated successfully. 
Jun 20 18:27:36.678971 containerd[1886]: time="2025-06-20T18:27:36.678925182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\" id:\"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\" pid:5419 exited_at:{seconds:1750444056 nanos:678546250}" Jun 20 18:27:36.682451 containerd[1886]: time="2025-06-20T18:27:36.682422509Z" level=info msg="received exit event container_id:\"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\" id:\"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\" pid:5419 exited_at:{seconds:1750444056 nanos:678546250}" Jun 20 18:27:36.687724 containerd[1886]: time="2025-06-20T18:27:36.687697341Z" level=info msg="StartContainer for \"370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2\" returns successfully" Jun 20 18:27:36.698201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-370fd75e44b2aac0c3757ed5145cd79aeeb163017b5966cb825e72ed8e4216f2-rootfs.mount: Deactivated successfully. Jun 20 18:27:37.604578 containerd[1886]: time="2025-06-20T18:27:37.604207599Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:27:37.630578 containerd[1886]: time="2025-06-20T18:27:37.630109326Z" level=info msg="Container a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:37.644648 containerd[1886]: time="2025-06-20T18:27:37.644616130Z" level=info msg="CreateContainer within sandbox \"7e50c420391ac3514ab3e136c23b69571cc97c2cfe235c23322b483ceb8420b1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\"" Jun 20 18:27:37.645422 containerd[1886]: time="2025-06-20T18:27:37.645407003Z" level=info msg="StartContainer for \"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\"" Jun 20 18:27:37.646538 containerd[1886]: time="2025-06-20T18:27:37.646517150Z" level=info msg="connecting to shim a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce" address="unix:///run/containerd/s/7bf9862122fa4c6a7ecbaecfc3b875390876512174823032d5ae5212e46e35ce" protocol=ttrpc version=3 Jun 20 18:27:37.664119 systemd[1]: Started cri-containerd-a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce.scope - libcontainer container a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce. 
Jun 20 18:27:37.692770 containerd[1886]: time="2025-06-20T18:27:37.692742842Z" level=info msg="StartContainer for \"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" returns successfully" Jun 20 18:27:37.743742 containerd[1886]: time="2025-06-20T18:27:37.743708139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"605be6dc268c60ee6edf560b7aa29fb7cdd2ac8aa8e26bbd8577266a00ddb8f2\" pid:5487 exited_at:{seconds:1750444057 nanos:743306391}" Jun 20 18:27:37.996093 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jun 20 18:27:38.623125 kubelet[3378]: I0620 18:27:38.622808 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2mb8k" podStartSLOduration=5.62279357 podStartE2EDuration="5.62279357s" podCreationTimestamp="2025-06-20 18:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:38.622749689 +0000 UTC m=+218.726439299" watchObservedRunningTime="2025-06-20 18:27:38.62279357 +0000 UTC m=+218.726483188" Jun 20 18:27:39.023310 containerd[1886]: time="2025-06-20T18:27:39.023269937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"e059138f8142d4f4bee3af5c7778cdf663ba15082e897dbf5daa14ab00a834b5\" pid:5565 exit_status:1 exited_at:{seconds:1750444059 nanos:22828331}" Jun 20 18:27:40.381097 systemd-networkd[1482]: lxc_health: Link UP Jun 20 18:27:40.381243 systemd-networkd[1482]: lxc_health: Gained carrier Jun 20 18:27:41.127664 containerd[1886]: time="2025-06-20T18:27:41.127617331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"374f06da8739d5301aa242a9ebffc0eca090cdd7b2f1a71c8d7874ef080de3c6\" pid:6019 exited_at:{seconds:1750444061 nanos:127311050}" Jun 20 18:27:42.372222 systemd-networkd[1482]: lxc_health: Gained IPv6LL Jun 20 18:27:43.202148 containerd[1886]: time="2025-06-20T18:27:43.202100980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"48cddca804af071cad22fa811fda115aab85d673ee7751d5ff173e6a63b44353\" pid:6057 exited_at:{seconds:1750444063 nanos:201577700}" Jun 20 18:27:45.277400 containerd[1886]: time="2025-06-20T18:27:45.277341187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"68642074eebe761ae24566e0ac42a52365e7254046a836e4171a2591bedc99ff\" pid:6079 exited_at:{seconds:1750444065 nanos:277031594}" Jun 20 18:27:47.348583 containerd[1886]: time="2025-06-20T18:27:47.348542161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a17cf1ceb80f495e9a8b60e7af88a53b4e21a728e39409a36f6fedda622da1ce\" id:\"137e3e98d2be341c9d44a6a541c259808e5e5231210de2f57ae0ccd558f2d76a\" pid:6102 exited_at:{seconds:1750444067 nanos:348292682}" Jun 20 18:27:47.438706 sshd[5337]: Connection closed by 10.200.16.10 port 48840 Jun 20 18:27:47.439288 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Jun 20 18:27:47.442713 systemd-logind[1869]: Session 27 logged out. Waiting for processes to exit. Jun 20 18:27:47.443526 systemd[1]: sshd@24-10.200.20.48:22-10.200.16.10:48840.service: Deactivated successfully. 
Jun 20 18:27:47.445825 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:27:47.449262 systemd-logind[1869]: Removed session 27.