Jun 20 18:25:09.023014 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jun 20 18:25:09.023032 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025
Jun 20 18:25:09.023038 kernel: KASLR enabled
Jun 20 18:25:09.023043 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jun 20 18:25:09.023047 kernel: printk: legacy bootconsole [pl11] enabled
Jun 20 18:25:09.023051 kernel: efi: EFI v2.7 by EDK II
Jun 20 18:25:09.023056 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20d018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598
Jun 20 18:25:09.023060 kernel: random: crng init done
Jun 20 18:25:09.023064 kernel: secureboot: Secure boot disabled
Jun 20 18:25:09.023068 kernel: ACPI: Early table checksum verification disabled
Jun 20 18:25:09.023072 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jun 20 18:25:09.023076 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023080 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023085 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jun 20 18:25:09.023090 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023094 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023098 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023103 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023108 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023112 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023116 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jun 20 18:25:09.023120 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:25:09.023124 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jun 20 18:25:09.023128 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jun 20 18:25:09.023132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jun 20 18:25:09.023137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jun 20 18:25:09.023141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jun 20 18:25:09.023145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jun 20 18:25:09.023149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jun 20 18:25:09.023154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jun 20 18:25:09.023159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jun 20 18:25:09.023163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jun 20 18:25:09.023167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jun 20 18:25:09.023171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jun 20 18:25:09.023175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jun 20 18:25:09.023179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jun 20 18:25:09.023183 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jun 20 18:25:09.023188 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff]
Jun 20 18:25:09.023192 kernel: Zone ranges:
Jun 20 18:25:09.023196 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jun 20 18:25:09.023203 kernel: DMA32 empty
Jun 20 18:25:09.023207 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jun 20 18:25:09.023212 kernel: Device empty
Jun 20 18:25:09.023216 kernel: Movable zone start for each node
Jun 20 18:25:09.023220 kernel: Early memory node ranges
Jun 20 18:25:09.023225 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jun 20 18:25:09.023230 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jun 20 18:25:09.023234 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jun 20 18:25:09.023238 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jun 20 18:25:09.023243 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jun 20 18:25:09.023247 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jun 20 18:25:09.023251 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jun 20 18:25:09.023255 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jun 20 18:25:09.023260 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jun 20 18:25:09.023264 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jun 20 18:25:09.023268 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jun 20 18:25:09.023272 kernel: psci: probing for conduit method from ACPI.
Jun 20 18:25:09.023288 kernel: psci: PSCIv1.1 detected in firmware.
Jun 20 18:25:09.023292 kernel: psci: Using standard PSCI v0.2 function IDs
Jun 20 18:25:09.023297 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jun 20 18:25:09.023301 kernel: psci: SMC Calling Convention v1.4
Jun 20 18:25:09.023305 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jun 20 18:25:09.023309 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jun 20 18:25:09.023314 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jun 20 18:25:09.023318 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jun 20 18:25:09.023322 kernel: pcpu-alloc: [0] 0 [0] 1
Jun 20 18:25:09.023327 kernel: Detected PIPT I-cache on CPU0
Jun 20 18:25:09.023331 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jun 20 18:25:09.023337 kernel: CPU features: detected: GIC system register CPU interface
Jun 20 18:25:09.023341 kernel: CPU features: detected: Spectre-v4
Jun 20 18:25:09.023345 kernel: CPU features: detected: Spectre-BHB
Jun 20 18:25:09.023349 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jun 20 18:25:09.023354 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jun 20 18:25:09.023358 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jun 20 18:25:09.023362 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jun 20 18:25:09.023367 kernel: alternatives: applying boot alternatives
Jun 20 18:25:09.023372 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac
Jun 20 18:25:09.023376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 18:25:09.023381 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 18:25:09.023386 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 18:25:09.023391 kernel: Fallback order for Node 0: 0
Jun 20 18:25:09.023395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jun 20 18:25:09.023399 kernel: Policy zone: Normal
Jun 20 18:25:09.023404 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 18:25:09.023408 kernel: software IO TLB: area num 2.
Jun 20 18:25:09.023412 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jun 20 18:25:09.023417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 18:25:09.023421 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 18:25:09.023426 kernel: rcu: RCU event tracing is enabled.
Jun 20 18:25:09.023430 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 18:25:09.023436 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 18:25:09.023440 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 18:25:09.023444 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 18:25:09.023449 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 18:25:09.023453 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:25:09.023458 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:25:09.023462 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jun 20 18:25:09.023466 kernel: GICv3: 960 SPIs implemented
Jun 20 18:25:09.023470 kernel: GICv3: 0 Extended SPIs implemented
Jun 20 18:25:09.023475 kernel: Root IRQ handler: gic_handle_irq
Jun 20 18:25:09.023479 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jun 20 18:25:09.023483 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jun 20 18:25:09.023489 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jun 20 18:25:09.023493 kernel: ITS: No ITS available, not enabling LPIs
Jun 20 18:25:09.023497 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 18:25:09.023502 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jun 20 18:25:09.023506 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 18:25:09.023510 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jun 20 18:25:09.023515 kernel: Console: colour dummy device 80x25
Jun 20 18:25:09.023519 kernel: printk: legacy console [tty1] enabled
Jun 20 18:25:09.023524 kernel: ACPI: Core revision 20240827
Jun 20 18:25:09.023529 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jun 20 18:25:09.023534 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:25:09.023539 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 18:25:09.023543 kernel: landlock: Up and running.
Jun 20 18:25:09.023547 kernel: SELinux: Initializing.
Jun 20 18:25:09.023552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 18:25:09.023556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 18:25:09.023564 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1
Jun 20 18:25:09.023570 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0
Jun 20 18:25:09.023575 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 18:25:09.023579 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 18:25:09.023584 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 18:25:09.023589 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 18:25:09.023594 kernel: Remapping and enabling EFI services.
Jun 20 18:25:09.023599 kernel: smp: Bringing up secondary CPUs ...
Jun 20 18:25:09.023604 kernel: Detected PIPT I-cache on CPU1
Jun 20 18:25:09.023609 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jun 20 18:25:09.023613 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jun 20 18:25:09.023619 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 18:25:09.023623 kernel: SMP: Total of 2 processors activated.
Jun 20 18:25:09.023628 kernel: CPU: All CPU(s) started at EL1
Jun 20 18:25:09.023633 kernel: CPU features: detected: 32-bit EL0 Support
Jun 20 18:25:09.023638 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jun 20 18:25:09.023642 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jun 20 18:25:09.023647 kernel: CPU features: detected: Common not Private translations
Jun 20 18:25:09.023652 kernel: CPU features: detected: CRC32 instructions
Jun 20 18:25:09.023656 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jun 20 18:25:09.023662 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jun 20 18:25:09.023667 kernel: CPU features: detected: LSE atomic instructions
Jun 20 18:25:09.023671 kernel: CPU features: detected: Privileged Access Never
Jun 20 18:25:09.023676 kernel: CPU features: detected: Speculation barrier (SB)
Jun 20 18:25:09.023681 kernel: CPU features: detected: TLB range maintenance instructions
Jun 20 18:25:09.023685 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jun 20 18:25:09.023690 kernel: CPU features: detected: Scalable Vector Extension
Jun 20 18:25:09.023695 kernel: alternatives: applying system-wide alternatives
Jun 20 18:25:09.023699 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jun 20 18:25:09.023705 kernel: SVE: maximum available vector length 16 bytes per vector
Jun 20 18:25:09.023710 kernel: SVE: default vector length 16 bytes per vector
Jun 20 18:25:09.023714 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved)
Jun 20 18:25:09.023719 kernel: devtmpfs: initialized
Jun 20 18:25:09.023724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 18:25:09.023729 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 18:25:09.023733 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jun 20 18:25:09.023738 kernel: 0 pages in range for non-PLT usage
Jun 20 18:25:09.023743 kernel: 508544 pages in range for PLT usage
Jun 20 18:25:09.023748 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 18:25:09.023753 kernel: SMBIOS 3.1.0 present.
Jun 20 18:25:09.023758 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jun 20 18:25:09.023762 kernel: DMI: Memory slots populated: 2/2
Jun 20 18:25:09.023767 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 18:25:09.023772 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jun 20 18:25:09.023776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jun 20 18:25:09.023781 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jun 20 18:25:09.023786 kernel: audit: initializing netlink subsys (disabled)
Jun 20 18:25:09.023791 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jun 20 18:25:09.023796 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 18:25:09.023801 kernel: cpuidle: using governor menu
Jun 20 18:25:09.023805 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jun 20 18:25:09.023810 kernel: ASID allocator initialised with 32768 entries
Jun 20 18:25:09.023815 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:25:09.023819 kernel: Serial: AMBA PL011 UART driver
Jun 20 18:25:09.023824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 18:25:09.023829 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 18:25:09.023834 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jun 20 18:25:09.023839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jun 20 18:25:09.023844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 18:25:09.023848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 18:25:09.023853 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jun 20 18:25:09.023858 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jun 20 18:25:09.023862 kernel: ACPI: Added _OSI(Module Device)
Jun 20 18:25:09.023867 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 18:25:09.023872 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 18:25:09.023877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 18:25:09.023882 kernel: ACPI: Interpreter enabled
Jun 20 18:25:09.023887 kernel: ACPI: Using GIC for interrupt routing
Jun 20 18:25:09.023891 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jun 20 18:25:09.023896 kernel: printk: legacy console [ttyAMA0] enabled
Jun 20 18:25:09.023901 kernel: printk: legacy bootconsole [pl11] disabled
Jun 20 18:25:09.023905 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jun 20 18:25:09.023910 kernel: ACPI: CPU0 has been hot-added
Jun 20 18:25:09.023915 kernel: ACPI: CPU1 has been hot-added
Jun 20 18:25:09.023920 kernel: iommu: Default domain type: Translated
Jun 20 18:25:09.023925 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jun 20 18:25:09.023929 kernel: efivars: Registered efivars operations
Jun 20 18:25:09.023934 kernel: vgaarb: loaded
Jun 20 18:25:09.023939 kernel: clocksource: Switched to clocksource arch_sys_counter
Jun 20 18:25:09.023943 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 18:25:09.023948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:25:09.023953 kernel: pnp: PnP ACPI init
Jun 20 18:25:09.023957 kernel: pnp: PnP ACPI: found 0 devices
Jun 20 18:25:09.023963 kernel: NET: Registered PF_INET protocol family
Jun 20 18:25:09.023968 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 18:25:09.023972 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 18:25:09.023977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 18:25:09.023982 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 18:25:09.023987 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 18:25:09.023991 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 18:25:09.023996 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 18:25:09.024001 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 18:25:09.024007 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 18:25:09.024011 kernel: PCI: CLS 0 bytes, default 64
Jun 20 18:25:09.024016 kernel: kvm [1]: HYP mode not available
Jun 20 18:25:09.024020 kernel: Initialise system trusted keyrings
Jun 20 18:25:09.024025 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 18:25:09.024030 kernel: Key type asymmetric registered
Jun 20 18:25:09.024034 kernel: Asymmetric key parser 'x509' registered
Jun 20 18:25:09.024039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jun 20 18:25:09.024044 kernel: io scheduler mq-deadline registered
Jun 20 18:25:09.024049 kernel: io scheduler kyber registered
Jun 20 18:25:09.024054 kernel: io scheduler bfq registered
Jun 20 18:25:09.024059 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:25:09.024063 kernel: thunder_xcv, ver 1.0
Jun 20 18:25:09.024068 kernel: thunder_bgx, ver 1.0
Jun 20 18:25:09.024072 kernel: nicpf, ver 1.0
Jun 20 18:25:09.024077 kernel: nicvf, ver 1.0
Jun 20 18:25:09.024174 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jun 20 18:25:09.024225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:25:08 UTC (1750443908)
Jun 20 18:25:09.024232 kernel: efifb: probing for efifb
Jun 20 18:25:09.024237 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 18:25:09.024241 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 18:25:09.024246 kernel: efifb: scrolling: redraw
Jun 20 18:25:09.024251 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 18:25:09.024255 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:25:09.024260 kernel: fb0: EFI VGA frame buffer device
Jun 20 18:25:09.024265 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jun 20 18:25:09.024271 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 18:25:09.024275 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jun 20 18:25:09.024288 kernel: watchdog: NMI not fully supported
Jun 20 18:25:09.024293 kernel: watchdog: Hard watchdog permanently disabled
Jun 20 18:25:09.024297 kernel: NET: Registered PF_INET6 protocol family
Jun 20 18:25:09.024302 kernel: Segment Routing with IPv6
Jun 20 18:25:09.024307 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 18:25:09.024311 kernel: NET: Registered PF_PACKET protocol family
Jun 20 18:25:09.024316 kernel: Key type dns_resolver registered
Jun 20 18:25:09.024322 kernel: registered taskstats version 1
Jun 20 18:25:09.024327 kernel: Loading compiled-in X.509 certificates
Jun 20 18:25:09.024332 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377'
Jun 20 18:25:09.024336 kernel: Demotion targets for Node 0: null
Jun 20 18:25:09.024341 kernel: Key type .fscrypt registered
Jun 20 18:25:09.024345 kernel: Key type fscrypt-provisioning registered
Jun 20 18:25:09.024350 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 18:25:09.024355 kernel: ima: Allocated hash algorithm: sha1
Jun 20 18:25:09.024360 kernel: ima: No architecture policies found
Jun 20 18:25:09.024365 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jun 20 18:25:09.024370 kernel: clk: Disabling unused clocks
Jun 20 18:25:09.024375 kernel: PM: genpd: Disabling unused power domains
Jun 20 18:25:09.024379 kernel: Warning: unable to open an initial console.
Jun 20 18:25:09.024384 kernel: Freeing unused kernel memory: 39424K
Jun 20 18:25:09.024389 kernel: Run /init as init process
Jun 20 18:25:09.024393 kernel: with arguments:
Jun 20 18:25:09.024398 kernel: /init
Jun 20 18:25:09.024402 kernel: with environment:
Jun 20 18:25:09.024408 kernel: HOME=/
Jun 20 18:25:09.024413 kernel: TERM=linux
Jun 20 18:25:09.024417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 18:25:09.024423 systemd[1]: Successfully made /usr/ read-only.
Jun 20 18:25:09.024430 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:25:09.024435 systemd[1]: Detected virtualization microsoft.
Jun 20 18:25:09.024440 systemd[1]: Detected architecture arm64.
Jun 20 18:25:09.024446 systemd[1]: Running in initrd.
Jun 20 18:25:09.024451 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:25:09.024457 systemd[1]: Hostname set to .
Jun 20 18:25:09.024462 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:25:09.024467 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:25:09.024472 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:25:09.024477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:25:09.024483 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:25:09.024489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:25:09.024494 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:25:09.024500 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:25:09.024506 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:25:09.024511 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:25:09.024516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:25:09.024521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:25:09.024527 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:25:09.024532 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:25:09.024537 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:25:09.024543 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:25:09.024548 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:25:09.024553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:25:09.024558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:25:09.024563 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:25:09.024568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:25:09.024574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:25:09.024580 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:25:09.024585 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:25:09.024590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:25:09.024595 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:25:09.024600 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:25:09.024606 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 18:25:09.024611 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:25:09.024617 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:25:09.024622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:25:09.024637 systemd-journald[224]: Collecting audit messages is disabled.
Jun 20 18:25:09.024651 systemd-journald[224]: Journal started
Jun 20 18:25:09.024665 systemd-journald[224]: Runtime Journal (/run/log/journal/d28564869a8c47d79365c2fff1a8ddc9) is 8M, max 78.5M, 70.5M free.
Jun 20 18:25:09.032310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:09.036970 systemd-modules-load[226]: Inserted module 'overlay'
Jun 20 18:25:09.054404 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:25:09.054440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:25:09.063025 kernel: Bridge firewalling registered
Jun 20 18:25:09.063313 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:25:09.064757 systemd-modules-load[226]: Inserted module 'br_netfilter'
Jun 20 18:25:09.076570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:25:09.083602 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:25:09.091363 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:25:09.098660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:09.108619 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:25:09.123686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:25:09.131612 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:25:09.146689 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:25:09.160445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:25:09.169346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:25:09.171275 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 18:25:09.186690 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:25:09.191899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:25:09.203869 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:25:09.222968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:25:09.233442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:25:09.248789 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac
Jun 20 18:25:09.275689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:25:09.288975 systemd-resolved[264]: Positive Trust Anchors:
Jun 20 18:25:09.288990 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:25:09.289009 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:25:09.290941 systemd-resolved[264]: Defaulting to hostname 'linux'.
Jun 20 18:25:09.293034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:25:09.328202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:25:09.390292 kernel: SCSI subsystem initialized
Jun 20 18:25:09.396291 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:25:09.404405 kernel: iscsi: registered transport (tcp)
Jun 20 18:25:09.416533 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:25:09.416544 kernel: QLogic iSCSI HBA Driver
Jun 20 18:25:09.430222 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 18:25:09.445556 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:25:09.451521 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 18:25:09.497479 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:25:09.504433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:25:09.565293 kernel: raid6: neonx8 gen() 18556 MB/s
Jun 20 18:25:09.584285 kernel: raid6: neonx4 gen() 18565 MB/s
Jun 20 18:25:09.603284 kernel: raid6: neonx2 gen() 17093 MB/s
Jun 20 18:25:09.623369 kernel: raid6: neonx1 gen() 15045 MB/s
Jun 20 18:25:09.642364 kernel: raid6: int64x8 gen() 10546 MB/s
Jun 20 18:25:09.661362 kernel: raid6: int64x4 gen() 10612 MB/s
Jun 20 18:25:09.681361 kernel: raid6: int64x2 gen() 8979 MB/s
Jun 20 18:25:09.702441 kernel: raid6: int64x1 gen() 7020 MB/s
Jun 20 18:25:09.702487 kernel: raid6: using algorithm neonx4 gen() 18565 MB/s
Jun 20 18:25:09.724471 kernel: raid6: .... xor() 15132 MB/s, rmw enabled
Jun 20 18:25:09.724505 kernel: raid6: using neon recovery algorithm
Jun 20 18:25:09.732059 kernel: xor: measuring software checksum speed
Jun 20 18:25:09.732067 kernel: 8regs : 28627 MB/sec
Jun 20 18:25:09.736731 kernel: 32regs : 27806 MB/sec
Jun 20 18:25:09.736739 kernel: arm64_neon : 37680 MB/sec
Jun 20 18:25:09.740367 kernel: xor: using function: arm64_neon (37680 MB/sec)
Jun 20 18:25:09.778296 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:25:09.782521 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:25:09.792392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:25:09.824138 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Jun 20 18:25:09.829885 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:25:09.842078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:25:09.865259 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation
Jun 20 18:25:09.883647 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:25:09.894855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:25:09.934079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:25:09.943659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:25:10.000309 kernel: hv_vmbus: Vmbus version:5.3 Jun 20 18:25:10.018558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:25:10.037558 kernel: hv_vmbus: registering driver hid_hyperv Jun 20 18:25:10.037582 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 20 18:25:10.037590 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 20 18:25:10.037596 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jun 20 18:25:10.037603 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 20 18:25:10.018961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:25:10.057061 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 20 18:25:10.057076 kernel: hv_vmbus: registering driver hv_storvsc Jun 20 18:25:10.048475 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:25:10.074456 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jun 20 18:25:10.064822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:25:10.085842 kernel: PTP clock support registered Jun 20 18:25:10.085855 kernel: hv_vmbus: registering driver hv_netvsc Jun 20 18:25:10.097209 kernel: scsi host0: storvsc_host_t Jun 20 18:25:10.097370 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 20 18:25:10.099729 kernel: scsi host1: storvsc_host_t Jun 20 18:25:10.100290 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jun 20 18:25:10.112592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jun 20 18:25:10.115314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:25:10.132621 kernel: hv_utils: Registering HyperV Utility Driver Jun 20 18:25:10.132636 kernel: hv_vmbus: registering driver hv_utils Jun 20 18:25:10.138877 kernel: hv_utils: Heartbeat IC version 3.0 Jun 20 18:25:10.138907 kernel: hv_utils: Shutdown IC version 3.2 Jun 20 18:25:10.138922 kernel: hv_utils: TimeSync IC version 4.0 Jun 20 18:25:10.134094 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:25:09.707841 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 20 18:25:09.714519 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 20 18:25:09.714627 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 18:25:09.714691 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 20 18:25:09.714753 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 20 18:25:09.715204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#5 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:25:09.715274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:25:09.715335 kernel: hv_netvsc 000d3afc-60ef-000d-3afc-60ef000d3afc eth0: VF slot 1 added Jun 20 18:25:09.715397 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:25:09.715403 systemd-journald[224]: Time jumped backwards, rotating. Jun 20 18:25:09.715431 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 18:25:10.138477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:25:09.728698 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 20 18:25:09.728838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 18:25:09.728846 kernel: hv_vmbus: registering driver hv_pci Jun 20 18:25:09.660511 systemd-resolved[264]: Clock change detected. Flushing caches. 
Jun 20 18:25:09.740515 kernel: hv_pci 39427714-85e9-4079-bba5-f3997dde4c63: PCI VMBus probing: Using version 0x10004 Jun 20 18:25:09.740640 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 20 18:25:09.752091 kernel: hv_pci 39427714-85e9-4079-bba5-f3997dde4c63: PCI host bridge to bus 85e9:00 Jun 20 18:25:09.752358 kernel: pci_bus 85e9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 20 18:25:09.752471 kernel: pci_bus 85e9:00: No busn resource found for root bus, will use [bus 00-ff] Jun 20 18:25:09.759090 kernel: pci 85e9:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jun 20 18:25:09.763479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:25:09.776259 kernel: pci 85e9:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 20 18:25:09.786940 kernel: pci 85e9:00:02.0: enabling Extended Tags Jun 20 18:25:09.786979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#44 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 18:25:09.801085 kernel: pci 85e9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 85e9:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jun 20 18:25:09.810775 kernel: pci_bus 85e9:00: busn_res: [bus 00-ff] end is updated to 00 Jun 20 18:25:09.810906 kernel: pci 85e9:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jun 20 18:25:09.824096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jun 20 18:25:09.875044 kernel: mlx5_core 85e9:00:02.0: enabling device (0000 -> 0002) Jun 20 18:25:09.882647 kernel: mlx5_core 85e9:00:02.0: PTM is not supported by PCIe Jun 20 18:25:09.882795 kernel: mlx5_core 85e9:00:02.0: firmware version: 16.30.5006 Jun 20 18:25:10.047761 kernel: hv_netvsc 000d3afc-60ef-000d-3afc-60ef000d3afc eth0: VF registering: eth1 Jun 20 18:25:10.047984 kernel: mlx5_core 85e9:00:02.0 eth1: joined to eth0 
Jun 20 18:25:10.053081 kernel: mlx5_core 85e9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 20 18:25:10.061107 kernel: mlx5_core 85e9:00:02.0 enP34281s1: renamed from eth1 Jun 20 18:25:10.872754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:25:10.923218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 20 18:25:11.003417 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 20 18:25:11.090614 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 20 18:25:11.095857 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 20 18:25:11.108115 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:25:11.117902 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:25:11.126929 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:25:11.136913 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:25:11.146366 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:25:11.171666 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:25:11.190914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:25:11.200232 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:25:11.204117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:25:12.222520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#233 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jun 20 18:25:12.237095 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:25:12.237443 disk-uuid[669]: The operation has completed successfully. 
Jun 20 18:25:12.297539 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:25:12.301001 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:25:12.330850 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:25:12.351967 sh[828]: Success Jun 20 18:25:12.408488 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:25:12.408521 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:25:12.412801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 18:25:12.421100 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jun 20 18:25:12.822484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:25:12.830165 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:25:12.851550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 18:25:12.861949 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 18:25:12.861965 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (845) Jun 20 18:25:12.875768 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3 Jun 20 18:25:12.875794 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:25:12.878625 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 18:25:13.498373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:25:13.502335 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:25:13.509616 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jun 20 18:25:13.510346 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:25:13.535645 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:25:13.559499 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (871) Jun 20 18:25:13.559531 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:25:13.564099 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:25:13.567503 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:25:13.621610 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:25:13.631937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:25:13.649542 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:25:13.649161 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:25:13.656087 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:25:13.679532 systemd-networkd[1009]: lo: Link UP Jun 20 18:25:13.679540 systemd-networkd[1009]: lo: Gained carrier Jun 20 18:25:13.681025 systemd-networkd[1009]: Enumeration completed Jun 20 18:25:13.681584 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:25:13.681587 systemd-networkd[1009]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:25:13.682241 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:25:13.686591 systemd[1]: Reached target network.target - Network. 
Jun 20 18:25:13.753083 kernel: mlx5_core 85e9:00:02.0 enP34281s1: Link up Jun 20 18:25:13.787099 kernel: hv_netvsc 000d3afc-60ef-000d-3afc-60ef000d3afc eth0: Data path switched to VF: enP34281s1 Jun 20 18:25:13.787229 systemd-networkd[1009]: enP34281s1: Link UP Jun 20 18:25:13.787311 systemd-networkd[1009]: eth0: Link UP Jun 20 18:25:13.787439 systemd-networkd[1009]: eth0: Gained carrier Jun 20 18:25:13.787447 systemd-networkd[1009]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:25:13.805582 systemd-networkd[1009]: enP34281s1: Gained carrier Jun 20 18:25:13.818094 systemd-networkd[1009]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:25:14.985240 systemd-networkd[1009]: enP34281s1: Gained IPv6LL Jun 20 18:25:14.985428 systemd-networkd[1009]: eth0: Gained IPv6LL Jun 20 18:25:15.641261 ignition[1016]: Ignition 2.21.0 Jun 20 18:25:15.641272 ignition[1016]: Stage: fetch-offline Jun 20 18:25:15.644922 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:25:15.641338 ignition[1016]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:15.652956 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 20 18:25:15.641344 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:15.641435 ignition[1016]: parsed url from cmdline: "" Jun 20 18:25:15.641437 ignition[1016]: no config URL provided Jun 20 18:25:15.641441 ignition[1016]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:25:15.641446 ignition[1016]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:25:15.641451 ignition[1016]: failed to fetch config: resource requires networking Jun 20 18:25:15.641570 ignition[1016]: Ignition finished successfully Jun 20 18:25:15.680884 ignition[1026]: Ignition 2.21.0 Jun 20 18:25:15.680889 ignition[1026]: Stage: fetch Jun 20 18:25:15.681105 ignition[1026]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:15.681115 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:15.681188 ignition[1026]: parsed url from cmdline: "" Jun 20 18:25:15.681190 ignition[1026]: no config URL provided Jun 20 18:25:15.681194 ignition[1026]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:25:15.681199 ignition[1026]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:25:15.681235 ignition[1026]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 20 18:25:15.773632 ignition[1026]: GET result: OK Jun 20 18:25:15.773692 ignition[1026]: config has been read from IMDS userdata Jun 20 18:25:15.773710 ignition[1026]: parsing config with SHA512: 35d5aace313496b952ca25b807e725020178562173162dce2ec56fed1cb0a7e461f87c7b34d6552dbbcbb530ac7fcacd50c60192a43b1e489bb889e144a76834 Jun 20 18:25:15.780129 unknown[1026]: fetched base config from "system" Jun 20 18:25:15.780139 unknown[1026]: fetched base config from "system" Jun 20 18:25:15.780439 ignition[1026]: fetch: fetch complete Jun 20 18:25:15.780151 unknown[1026]: fetched user config from "azure" Jun 20 18:25:15.780444 ignition[1026]: fetch: fetch passed 
Jun 20 18:25:15.782365 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:25:15.780499 ignition[1026]: Ignition finished successfully Jun 20 18:25:15.787874 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 18:25:15.821476 ignition[1033]: Ignition 2.21.0 Jun 20 18:25:15.821490 ignition[1033]: Stage: kargs Jun 20 18:25:15.821610 ignition[1033]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:15.825651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:25:15.821616 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:15.832506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:25:15.822093 ignition[1033]: kargs: kargs passed Jun 20 18:25:15.822123 ignition[1033]: Ignition finished successfully Jun 20 18:25:15.858483 ignition[1040]: Ignition 2.21.0 Jun 20 18:25:15.858487 ignition[1040]: Stage: disks Jun 20 18:25:15.858636 ignition[1040]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:15.863234 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:25:15.858642 ignition[1040]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:15.867537 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:25:15.859101 ignition[1040]: disks: disks passed Jun 20 18:25:15.874917 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:25:15.859133 ignition[1040]: Ignition finished successfully Jun 20 18:25:15.883222 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:25:15.891154 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:25:15.899313 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:25:15.906135 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jun 20 18:25:16.046261 systemd-fsck[1048]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jun 20 18:25:16.055041 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:25:16.061084 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:25:16.428194 kernel: EXT4-fs (sda9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none. Jun 20 18:25:16.428758 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:25:16.432616 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:25:16.474841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:25:16.488200 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:25:16.496982 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 18:25:16.511326 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1062) Jun 20 18:25:16.511768 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:25:16.539388 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:25:16.539411 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:25:16.539419 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:25:16.511795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:25:16.534060 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:25:16.540494 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 18:25:16.557761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:25:17.585823 coreos-metadata[1064]: Jun 20 18:25:17.585 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:25:17.594185 coreos-metadata[1064]: Jun 20 18:25:17.594 INFO Fetch successful Jun 20 18:25:17.597997 coreos-metadata[1064]: Jun 20 18:25:17.594 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:25:17.605898 coreos-metadata[1064]: Jun 20 18:25:17.605 INFO Fetch successful Jun 20 18:25:17.641770 coreos-metadata[1064]: Jun 20 18:25:17.641 INFO wrote hostname ci-4344.1.0-a-c937e4b650 to /sysroot/etc/hostname Jun 20 18:25:17.648603 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:25:18.185396 initrd-setup-root[1092]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:25:18.190542 initrd-setup-root[1099]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:25:18.195878 initrd-setup-root[1106]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:25:18.234863 initrd-setup-root[1113]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:25:20.144222 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:25:20.150132 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:25:20.171522 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:25:20.185541 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:25:20.187111 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:25:20.204158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 18:25:20.213632 ignition[1182]: INFO : Ignition 2.21.0 Jun 20 18:25:20.213632 ignition[1182]: INFO : Stage: mount Jun 20 18:25:20.219000 ignition[1182]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:20.219000 ignition[1182]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:20.219000 ignition[1182]: INFO : mount: mount passed Jun 20 18:25:20.219000 ignition[1182]: INFO : Ignition finished successfully Jun 20 18:25:20.217539 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:25:20.223571 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:25:20.251179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:25:20.270105 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1194) Jun 20 18:25:20.281039 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a Jun 20 18:25:20.281064 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 20 18:25:20.284335 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 18:25:20.286398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 18:25:20.308379 ignition[1212]: INFO : Ignition 2.21.0 Jun 20 18:25:20.311779 ignition[1212]: INFO : Stage: files Jun 20 18:25:20.311779 ignition[1212]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:20.311779 ignition[1212]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:20.311779 ignition[1212]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:25:20.327433 ignition[1212]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:25:20.327433 ignition[1212]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:25:20.417951 ignition[1212]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:25:20.417951 ignition[1212]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:25:20.428468 ignition[1212]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:25:20.418232 unknown[1212]: wrote ssh authorized keys file for user: core Jun 20 18:25:20.453013 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 20 18:25:20.461031 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 20 18:25:20.585477 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:25:21.265390 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:25:21.272698 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" 
Jun 20 18:25:21.328592 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jun 20 18:25:21.961262 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 18:25:22.158505 ignition[1212]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jun 20 18:25:22.158505 ignition[1212]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 20 18:25:22.197293 ignition[1212]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:25:22.209601 ignition[1212]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:25:22.209601 ignition[1212]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 20 18:25:22.222843 ignition[1212]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:25:22.222843 ignition[1212]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:25:22.222843 ignition[1212]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:25:22.222843 ignition[1212]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:25:22.222843 ignition[1212]: INFO : files: files passed Jun 20 18:25:22.222843 ignition[1212]: INFO : Ignition finished successfully Jun 20 18:25:22.218194 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:25:22.228326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:25:22.251577 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jun 20 18:25:22.265277 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:25:22.268336 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:25:22.320111 initrd-setup-root-after-ignition[1240]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:25:22.320111 initrd-setup-root-after-ignition[1240]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:25:22.332629 initrd-setup-root-after-ignition[1244]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:25:22.327169 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:25:22.337672 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:25:22.348082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:25:22.379346 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:25:22.379449 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:25:22.387887 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:25:22.396432 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:25:22.403838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:25:22.404480 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:25:22.439590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:25:22.445486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:25:22.465641 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:25:22.470430 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 20 18:25:22.479056 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:25:22.486769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:25:22.486853 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:25:22.497999 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:25:22.502267 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:25:22.509539 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:25:22.517192 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:25:22.525097 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:25:22.533589 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 18:25:22.542739 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:25:22.551594 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:25:22.561054 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:25:22.569261 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:25:22.578602 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:25:22.585817 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:25:22.585907 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:25:22.596578 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:25:22.601193 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:25:22.609799 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:25:22.613518 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:25:22.618398 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jun 20 18:25:22.618470 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:25:22.631046 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:25:22.631141 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:25:22.636140 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:25:22.636209 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:25:22.644130 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:25:22.644192 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:25:22.658304 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:25:22.716480 ignition[1264]: INFO : Ignition 2.21.0 Jun 20 18:25:22.716480 ignition[1264]: INFO : Stage: umount Jun 20 18:25:22.716480 ignition[1264]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:25:22.716480 ignition[1264]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:25:22.716480 ignition[1264]: INFO : umount: umount passed Jun 20 18:25:22.716480 ignition[1264]: INFO : Ignition finished successfully Jun 20 18:25:22.670919 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:25:22.671031 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:25:22.691698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:25:22.699849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:25:22.699963 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:25:22.712912 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:25:22.713036 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:25:22.724984 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jun 20 18:25:22.725668 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 18:25:22.725743 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 18:25:22.731612 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 18:25:22.731679 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 18:25:22.737665 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 18:25:22.737704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 18:25:22.746096 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 18:25:22.746124 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 18:25:22.753075 systemd[1]: Stopped target network.target - Network.
Jun 20 18:25:22.760601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 18:25:22.760650 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:25:22.765286 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 18:25:22.772594 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 18:25:22.772629 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:25:22.782045 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 18:25:22.789123 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 18:25:22.796408 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 18:25:22.796444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:25:22.804411 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 18:25:22.804437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:25:22.811995 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 18:25:22.812031 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 18:25:22.820080 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 18:25:22.820107 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 18:25:22.827414 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 18:25:22.834515 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 18:25:22.842283 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 18:25:22.842371 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 18:25:22.850392 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 18:25:22.850446 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 18:25:22.857795 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 18:25:22.858265 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 18:25:23.047527 kernel: hv_netvsc 000d3afc-60ef-000d-3afc-60ef000d3afc eth0: Data path switched from VF: enP34281s1
Jun 20 18:25:22.870557 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 18:25:22.870652 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 18:25:22.882472 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 18:25:22.882620 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 18:25:22.882717 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 18:25:22.893649 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 18:25:22.894272 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 18:25:22.902327 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 18:25:22.902363 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:25:22.910633 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 18:25:22.925740 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 18:25:22.925798 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:25:22.933791 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 18:25:22.933827 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:25:22.944405 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 18:25:22.944437 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:25:22.949151 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 18:25:22.949197 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:25:22.958211 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:25:22.963794 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 18:25:22.963848 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:22.978743 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 18:25:22.982998 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:25:22.990578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 18:25:22.990610 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:25:22.998388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 18:25:22.998415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:25:23.006786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 18:25:23.006824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:25:23.018919 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 18:25:23.018959 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:25:23.029632 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:25:23.029666 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:25:23.048138 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 18:25:23.062255 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 18:25:23.062313 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:25:23.068130 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 18:25:23.068169 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:25:23.082430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:23.082469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:23.260465 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Jun 20 18:25:23.092067 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 18:25:23.092126 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 18:25:23.092153 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:23.092397 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 18:25:23.092481 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 18:25:23.100675 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 18:25:23.100739 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 18:25:23.109168 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 18:25:23.121193 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 18:25:23.143753 systemd[1]: Switching root.
Jun 20 18:25:23.302220 systemd-journald[224]: Journal stopped
Jun 20 18:25:30.221480 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 18:25:30.221498 kernel: SELinux: policy capability open_perms=1
Jun 20 18:25:30.221506 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 18:25:30.221511 kernel: SELinux: policy capability always_check_network=0
Jun 20 18:25:30.221517 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 18:25:30.221523 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 18:25:30.221529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 18:25:30.221535 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 18:25:30.221540 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 18:25:30.221545 kernel: audit: type=1403 audit(1750443925.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 18:25:30.221552 systemd[1]: Successfully loaded SELinux policy in 277.493ms.
Jun 20 18:25:30.221559 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.769ms.
Jun 20 18:25:30.221566 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:25:30.221572 systemd[1]: Detected virtualization microsoft.
Jun 20 18:25:30.221578 systemd[1]: Detected architecture arm64.
Jun 20 18:25:30.221585 systemd[1]: Detected first boot.
Jun 20 18:25:30.221591 systemd[1]: Hostname set to .
Jun 20 18:25:30.221597 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:25:30.221603 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 18:25:30.221608 zram_generator::config[1306]: No configuration found.
Jun 20 18:25:30.221615 systemd[1]: Populated /etc with preset unit settings.
Jun 20 18:25:30.221621 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 18:25:30.221628 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 18:25:30.221634 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 18:25:30.221639 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:25:30.221645 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 18:25:30.221652 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 18:25:30.221658 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 18:25:30.221664 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 18:25:30.221671 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 18:25:30.221678 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 18:25:30.221684 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 18:25:30.221689 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 18:25:30.221696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:25:30.221702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:25:30.221708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 18:25:30.221714 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 18:25:30.221720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 18:25:30.221727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:25:30.221733 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jun 20 18:25:30.221741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:25:30.221747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:25:30.221753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 18:25:30.221759 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 18:25:30.221765 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:25:30.221772 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 18:25:30.221778 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:25:30.221784 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:25:30.221790 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:25:30.221797 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:25:30.221803 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 18:25:30.221809 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 18:25:30.221816 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 18:25:30.221822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:25:30.221829 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:25:30.221835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:25:30.221841 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 18:25:30.221847 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 18:25:30.221854 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 18:25:30.221860 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 18:25:30.221866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 18:25:30.221872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 18:25:30.221878 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 18:25:30.221885 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 18:25:30.221891 systemd[1]: Reached target machines.target - Containers.
Jun 20 18:25:30.221897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 18:25:30.221904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:30.221910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:25:30.221917 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 18:25:30.221923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:25:30.221930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:25:30.221936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:25:30.221942 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 18:25:30.221948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:25:30.221954 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 18:25:30.221962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 18:25:30.221968 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 18:25:30.221974 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 18:25:30.221980 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 18:25:30.221987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:25:30.221993 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:25:30.221999 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:25:30.222005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 18:25:30.222012 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 18:25:30.222018 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 18:25:30.222024 kernel: fuse: init (API version 7.41)
Jun 20 18:25:30.222030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:25:30.222036 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 18:25:30.222042 systemd[1]: Stopped verity-setup.service.
Jun 20 18:25:30.222048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 18:25:30.222054 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 18:25:30.222080 systemd-journald[1411]: Collecting audit messages is disabled.
Jun 20 18:25:30.222095 systemd-journald[1411]: Journal started
Jun 20 18:25:30.222110 systemd-journald[1411]: Runtime Journal (/run/log/journal/863c4b7c52604f4c87c051d013cc9aa1) is 8M, max 78.5M, 70.5M free.
Jun 20 18:25:29.483406 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 18:25:29.489474 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 18:25:29.489826 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 18:25:29.491210 systemd[1]: systemd-journald.service: Consumed 2.277s CPU time.
Jun 20 18:25:30.234280 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:25:30.234326 kernel: loop: module loaded
Jun 20 18:25:30.234656 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 18:25:30.238676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 18:25:30.243145 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 18:25:30.247778 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 18:25:30.251922 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 18:25:30.256801 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:25:30.262041 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 18:25:30.262318 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 18:25:30.267063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:25:30.267197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:25:30.271788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:25:30.271896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:25:30.283096 kernel: ACPI: bus type drm_connector registered
Jun 20 18:25:30.281376 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 18:25:30.281491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 18:25:30.286623 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:25:30.286750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:25:30.290801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:25:30.290906 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:25:30.295763 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:25:30.300538 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:25:30.305777 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 18:25:30.312570 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 18:25:30.325166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:25:30.331814 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 18:25:30.337354 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 18:25:30.350600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 18:25:30.355246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 18:25:30.355271 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:25:30.359970 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 18:25:30.365672 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 18:25:30.369558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:25:30.370519 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 18:25:30.376191 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 18:25:30.380601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:25:30.381274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 18:25:30.385401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:25:30.387196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:25:30.392211 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 18:25:30.397522 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 18:25:30.402601 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 18:25:30.407355 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 18:25:30.441538 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 18:25:30.446420 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 18:25:30.447968 systemd-journald[1411]: Time spent on flushing to /var/log/journal/863c4b7c52604f4c87c051d013cc9aa1 is 34.312ms for 938 entries.
Jun 20 18:25:30.447968 systemd-journald[1411]: System Journal (/var/log/journal/863c4b7c52604f4c87c051d013cc9aa1) is 11.8M, max 2.6G, 2.6G free.
Jun 20 18:25:30.558716 systemd-journald[1411]: Received client request to flush runtime journal.
Jun 20 18:25:30.558760 systemd-journald[1411]: /var/log/journal/863c4b7c52604f4c87c051d013cc9aa1/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jun 20 18:25:30.558777 systemd-journald[1411]: Rotating system journal.
Jun 20 18:25:30.558792 kernel: loop0: detected capacity change from 0 to 138376
Jun 20 18:25:30.456252 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 18:25:30.559869 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 18:25:30.569247 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 18:25:30.570393 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 18:25:30.641881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:25:30.836737 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 18:25:30.842524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:25:31.072168 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jun 20 18:25:31.072505 systemd-tmpfiles[1461]: ACLs are not supported, ignoring.
Jun 20 18:25:31.108374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:25:31.497120 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 18:25:31.511098 kernel: loop1: detected capacity change from 0 to 203944
Jun 20 18:25:31.564124 kernel: loop2: detected capacity change from 0 to 107312
Jun 20 18:25:31.931112 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 18:25:31.937587 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:25:31.964057 systemd-udevd[1469]: Using default interface naming scheme 'v255'.
Jun 20 18:25:32.336090 kernel: loop3: detected capacity change from 0 to 28936
Jun 20 18:25:32.370499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:25:32.378807 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:25:32.429311 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jun 20 18:25:32.468425 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 18:25:32.502097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#293 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 18:25:32.556594 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 18:25:32.578183 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 18:25:32.668035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:32.688298 kernel: hv_vmbus: registering driver hyperv_fb
Jun 20 18:25:32.688349 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jun 20 18:25:32.691181 kernel: hv_vmbus: registering driver hv_balloon
Jun 20 18:25:32.691234 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jun 20 18:25:32.699745 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 20 18:25:32.700122 kernel: hv_balloon: Memory hot add disabled on ARM64
Jun 20 18:25:32.700166 kernel: Console: switching to colour dummy device 80x25
Jun 20 18:25:32.711403 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:25:32.717762 systemd-networkd[1484]: lo: Link UP
Jun 20 18:25:32.717984 systemd-networkd[1484]: lo: Gained carrier
Jun 20 18:25:32.719528 systemd-networkd[1484]: Enumeration completed
Jun 20 18:25:32.719667 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:25:32.719945 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:32.719951 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:25:32.725348 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 18:25:32.731169 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 18:25:32.737438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:32.740212 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:32.746888 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:32.747834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:32.764091 kernel: mlx5_core 85e9:00:02.0 enP34281s1: Link up
Jun 20 18:25:32.784097 kernel: hv_netvsc 000d3afc-60ef-000d-3afc-60ef000d3afc eth0: Data path switched to VF: enP34281s1
Jun 20 18:25:32.784506 systemd-networkd[1484]: enP34281s1: Link UP
Jun 20 18:25:32.784574 systemd-networkd[1484]: eth0: Link UP
Jun 20 18:25:32.784576 systemd-networkd[1484]: eth0: Gained carrier
Jun 20 18:25:32.784587 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:32.791260 systemd-networkd[1484]: enP34281s1: Gained carrier
Jun 20 18:25:32.805099 systemd-networkd[1484]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jun 20 18:25:32.818095 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 18:25:32.845121 kernel: MACsec IEEE 802.1AE
Jun 20 18:25:32.975305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 20 18:25:32.981154 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 18:25:33.028206 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 18:25:33.044090 kernel: loop4: detected capacity change from 0 to 138376
Jun 20 18:25:33.051083 kernel: loop5: detected capacity change from 0 to 203944
Jun 20 18:25:33.057090 kernel: loop6: detected capacity change from 0 to 107312
Jun 20 18:25:33.062084 kernel: loop7: detected capacity change from 0 to 28936
Jun 20 18:25:33.064211 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 18:25:33.064606 (sd-merge)[1616]: Merged extensions into '/usr'.
Jun 20 18:25:33.067622 systemd[1]: Reload requested from client PID 1446 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 18:25:33.067635 systemd[1]: Reloading...
Jun 20 18:25:33.114094 zram_generator::config[1646]: No configuration found.
Jun 20 18:25:33.183926 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:25:33.294207 systemd[1]: Reloading finished in 226 ms.
Jun 20 18:25:33.315063 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 18:25:33.324100 systemd[1]: Starting ensure-sysext.service...
Jun 20 18:25:33.329470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:25:33.339423 systemd[1]: Reload requested from client PID 1702 ('systemctl') (unit ensure-sysext.service)...
Jun 20 18:25:33.339514 systemd[1]: Reloading...
Jun 20 18:25:33.343673 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 18:25:33.343693 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 18:25:33.343861 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 18:25:33.343991 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 18:25:33.344450 systemd-tmpfiles[1703]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 18:25:33.344582 systemd-tmpfiles[1703]: ACLs are not supported, ignoring.
Jun 20 18:25:33.344610 systemd-tmpfiles[1703]: ACLs are not supported, ignoring.
Jun 20 18:25:33.385865 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:25:33.385874 systemd-tmpfiles[1703]: Skipping /boot
Jun 20 18:25:33.391143 zram_generator::config[1736]: No configuration found.
Jun 20 18:25:33.395849 systemd-tmpfiles[1703]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:25:33.395860 systemd-tmpfiles[1703]: Skipping /boot
Jun 20 18:25:33.464204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:25:33.544674 systemd[1]: Reloading finished in 204 ms.
Jun 20 18:25:33.566960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:33.579412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:25:33.595475 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:25:33.607809 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 18:25:33.612311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:33.615251 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:25:33.619920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:25:33.626253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:25:33.630113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:25:33.630201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:25:33.636717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:25:33.644179 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:25:33.651167 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:25:33.658192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:25:33.662220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:25:33.667380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:25:33.667572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:25:33.672673 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:25:33.672895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:25:33.686225 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:25:33.694826 systemd[1]: Finished ensure-sysext.service. Jun 20 18:25:33.699851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:25:33.701212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:25:33.714195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jun 20 18:25:33.720259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:25:33.725298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:25:33.729815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:25:33.729845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:25:33.729882 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:25:33.735848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:25:33.738205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:25:33.742702 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:25:33.742839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:25:33.747444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:25:33.747561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:25:33.752490 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:25:33.752603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:25:33.760534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:25:33.760606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:25:33.772407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:25:33.804320 systemd-resolved[1800]: Positive Trust Anchors: Jun 20 18:25:33.804335 systemd-resolved[1800]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:25:33.804354 systemd-resolved[1800]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:25:33.929180 systemd-networkd[1484]: eth0: Gained IPv6LL Jun 20 18:25:33.931138 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:25:33.992363 augenrules[1837]: No rules Jun 20 18:25:33.993443 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:25:33.993623 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:25:34.020465 systemd-resolved[1800]: Using system hostname 'ci-4344.1.0-a-c937e4b650'. Jun 20 18:25:34.021740 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:25:34.026095 systemd[1]: Reached target network.target - Network. Jun 20 18:25:34.029634 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:25:34.033898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:25:34.546211 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:25:34.551339 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jun 20 18:25:34.633223 systemd-networkd[1484]: enP34281s1: Gained IPv6LL Jun 20 18:25:41.669133 ldconfig[1441]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:25:41.685917 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:25:41.692707 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:25:41.710206 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:25:41.714779 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:25:41.718836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:25:41.723806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:25:41.728959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:25:41.733144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:25:41.738141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:25:41.743101 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:25:41.743124 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:25:41.749986 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:25:41.754691 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:25:41.759946 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:25:41.765390 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:25:41.770475 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jun 20 18:25:41.775523 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:25:41.781246 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:25:41.785568 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:25:41.790348 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:25:41.795210 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:25:41.798793 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:25:41.802560 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:25:41.802579 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:25:41.837942 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 18:25:41.849161 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:25:41.855792 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:25:41.864204 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:25:41.870241 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:25:41.883014 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:25:41.889056 (chronyd)[1849]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 18:25:41.898924 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:25:41.900322 jq[1857]: false Jun 20 18:25:41.903092 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:25:41.905937 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Jun 20 18:25:41.910562 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 18:25:41.911318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:25:41.926178 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:25:41.934216 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:25:41.941254 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:25:41.947545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:25:41.954196 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:25:41.961653 KVP[1859]: KVP starting; pid is:1859 Jun 20 18:25:41.963134 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:25:41.966112 chronyd[1875]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 18:25:41.971021 KVP[1859]: KVP LIC Version: 3.1 Jun 20 18:25:41.971203 kernel: hv_utils: KVP IC version 4.0 Jun 20 18:25:41.971941 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:25:41.975282 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:25:41.975823 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:25:41.982177 extend-filesystems[1858]: Found /dev/sda6 Jun 20 18:25:41.986004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:25:41.999205 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jun 20 18:25:42.006182 jq[1878]: true Jun 20 18:25:42.008424 chronyd[1875]: Timezone right/UTC failed leap second check, ignoring Jun 20 18:25:42.008559 chronyd[1875]: Loaded seccomp filter (level 2) Jun 20 18:25:42.009540 systemd[1]: Started chronyd.service - NTP client/server. Jun 20 18:25:42.014138 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:25:42.017224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:25:42.018832 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:25:42.018984 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:25:42.025604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:25:42.025757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:25:42.045878 (ntainerd)[1891]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:25:42.048725 jq[1889]: true Jun 20 18:25:42.064392 extend-filesystems[1858]: Found /dev/sda9 Jun 20 18:25:42.069409 extend-filesystems[1858]: Checking size of /dev/sda9 Jun 20 18:25:42.092641 systemd-logind[1872]: New seat seat0. Jun 20 18:25:42.094842 systemd-logind[1872]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jun 20 18:25:42.095012 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:25:42.111487 update_engine[1877]: I20250620 18:25:42.109386 1877 main.cc:92] Flatcar Update Engine starting Jun 20 18:25:42.134627 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:25:42.165473 extend-filesystems[1858]: Old size kept for /dev/sda9 Jun 20 18:25:42.169060 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:25:42.169228 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jun 20 18:25:42.192102 sshd_keygen[1876]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:25:42.216065 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:25:42.225217 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:25:42.230181 bash[1919]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:25:42.232267 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:25:42.239902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:25:42.248783 tar[1887]: linux-arm64/helm Jun 20 18:25:42.250037 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 18:25:42.271961 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:25:42.272213 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:25:42.293042 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:25:42.311993 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:25:42.345052 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:25:42.352690 dbus-daemon[1855]: [system] SELinux support is enabled Jun 20 18:25:42.355532 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:25:42.358707 update_engine[1877]: I20250620 18:25:42.358660 1877 update_check_scheduler.cc:74] Next update check in 10m7s Jun 20 18:25:42.370999 dbus-daemon[1855]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 18:25:42.371304 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:25:42.380452 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 20 18:25:42.389363 systemd[1]: Reached target getty.target - Login Prompts. 
Jun 20 18:25:42.395211 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:25:42.395239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:25:42.403323 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:25:42.403339 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:25:42.409346 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:25:42.416970 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:25:42.456161 coreos-metadata[1851]: Jun 20 18:25:42.454 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:25:42.459284 coreos-metadata[1851]: Jun 20 18:25:42.459 INFO Fetch successful Jun 20 18:25:42.459823 coreos-metadata[1851]: Jun 20 18:25:42.459 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 18:25:42.464215 coreos-metadata[1851]: Jun 20 18:25:42.464 INFO Fetch successful Jun 20 18:25:42.464484 coreos-metadata[1851]: Jun 20 18:25:42.464 INFO Fetching http://168.63.129.16/machine/1fb81d32-abc7-4e61-be51-ede37b8640f0/a8adebd9%2D17e9%2D4304%2Dadaf%2Dc292bbaadc53.%5Fci%2D4344.1.0%2Da%2Dc937e4b650?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 18:25:42.466859 coreos-metadata[1851]: Jun 20 18:25:42.466 INFO Fetch successful Jun 20 18:25:42.466859 coreos-metadata[1851]: Jun 20 18:25:42.466 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:25:42.474635 coreos-metadata[1851]: Jun 20 18:25:42.474 INFO Fetch successful Jun 20 18:25:42.505310 systemd[1]: Finished 
coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:25:42.512471 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:25:42.606148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:25:42.617347 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:25:42.677785 tar[1887]: linux-arm64/LICENSE Jun 20 18:25:42.678067 tar[1887]: linux-arm64/README.md Jun 20 18:25:42.688857 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:25:42.831415 locksmithd[2021]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:25:42.862815 kubelet[2032]: E0620 18:25:42.862753 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:25:42.864726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:25:42.864830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:25:42.865161 systemd[1]: kubelet.service: Consumed 539ms CPU time, 255.1M memory peak. 
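The kubelet exit above (status=1) is caused by the missing /var/lib/kubelet/config.yaml; on a node that has not yet joined a cluster this is expected, since that file is normally written by `kubeadm init` or `kubeadm join`. A minimal sketch of such a file for illustration only (field values are assumptions, not taken from this machine):

```yaml
# /var/lib/kubelet/config.yaml -- minimal KubeletConfiguration sketch
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, consistent with SystemdCgroup=true in the
# containerd CRI runtime options logged further below
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

Until kubeadm (or an equivalent provisioner) creates this file, kubelet.service will keep failing and restarting exactly as logged here.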
Jun 20 18:25:43.151383 containerd[1891]: time="2025-06-20T18:25:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 18:25:43.151842 containerd[1891]: time="2025-06-20T18:25:43.151811248Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157400248Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.776µs" Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157427016Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157439696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157588040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157602912Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157618728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157653984Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157660904Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 
18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157803136Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157813216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157820608Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158085 containerd[1891]: time="2025-06-20T18:25:43.157825760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158274 containerd[1891]: time="2025-06-20T18:25:43.157877352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158274 containerd[1891]: time="2025-06-20T18:25:43.158014504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158274 containerd[1891]: time="2025-06-20T18:25:43.158032600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:25:43.158274 containerd[1891]: time="2025-06-20T18:25:43.158038720Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 18:25:43.158274 containerd[1891]: time="2025-06-20T18:25:43.158066464Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 18:25:43.158274 
containerd[1891]: time="2025-06-20T18:25:43.158224832Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 18:25:43.158346 containerd[1891]: time="2025-06-20T18:25:43.158277016Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:25:43.172820 containerd[1891]: time="2025-06-20T18:25:43.172788120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172834560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172844824Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172852824Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172859992Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172870720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172878016Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172885072Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172892408Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: 
time="2025-06-20T18:25:43.172898704Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 18:25:43.172907 containerd[1891]: time="2025-06-20T18:25:43.172904160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 18:25:43.173034 containerd[1891]: time="2025-06-20T18:25:43.172912672Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 18:25:43.173034 containerd[1891]: time="2025-06-20T18:25:43.173005416Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 18:25:43.173034 containerd[1891]: time="2025-06-20T18:25:43.173020584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 18:25:43.173034 containerd[1891]: time="2025-06-20T18:25:43.173032808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 18:25:43.173092 containerd[1891]: time="2025-06-20T18:25:43.173040760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 18:25:43.173092 containerd[1891]: time="2025-06-20T18:25:43.173050608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 18:25:43.173092 containerd[1891]: time="2025-06-20T18:25:43.173057696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 18:25:43.173092 containerd[1891]: time="2025-06-20T18:25:43.173064376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 18:25:43.173092 containerd[1891]: time="2025-06-20T18:25:43.173087344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 18:25:43.173150 containerd[1891]: time="2025-06-20T18:25:43.173095016Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 18:25:43.173150 containerd[1891]: time="2025-06-20T18:25:43.173105640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 18:25:43.173150 containerd[1891]: time="2025-06-20T18:25:43.173112064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 18:25:43.173185 containerd[1891]: time="2025-06-20T18:25:43.173163744Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 18:25:43.173185 containerd[1891]: time="2025-06-20T18:25:43.173173808Z" level=info msg="Start snapshots syncer" Jun 20 18:25:43.173208 containerd[1891]: time="2025-06-20T18:25:43.173188432Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 18:25:43.174724 containerd[1891]: time="2025-06-20T18:25:43.174204664Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 18:25:43.174724 containerd[1891]: time="2025-06-20T18:25:43.174263504Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174324424Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174428544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174443600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174451096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174457592Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174464920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174471632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174482568Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174503288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174510096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174516728Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174536072Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174545648Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:25:43.174844 containerd[1891]: time="2025-06-20T18:25:43.174550736Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174556792Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174565776Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174571568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174581056Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174591344Z" level=info msg="runtime interface created" Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174594432Z" level=info msg="created NRI interface" Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174599512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174609360Z" level=info msg="Connect containerd service" Jun 20 18:25:43.175021 containerd[1891]: time="2025-06-20T18:25:43.174626896Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:25:43.175869 
containerd[1891]: time="2025-06-20T18:25:43.175841512Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:25:44.497160 containerd[1891]: time="2025-06-20T18:25:44.497090400Z" level=info msg="Start subscribing containerd event" Jun 20 18:25:44.498088 containerd[1891]: time="2025-06-20T18:25:44.497623544Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:25:44.498088 containerd[1891]: time="2025-06-20T18:25:44.497664280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:25:44.498188 containerd[1891]: time="2025-06-20T18:25:44.498173016Z" level=info msg="Start recovering state" Jun 20 18:25:44.498297 containerd[1891]: time="2025-06-20T18:25:44.498287360Z" level=info msg="Start event monitor" Jun 20 18:25:44.498380 containerd[1891]: time="2025-06-20T18:25:44.498368320Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:25:44.498416 containerd[1891]: time="2025-06-20T18:25:44.498406168Z" level=info msg="Start streaming server" Jun 20 18:25:44.498460 containerd[1891]: time="2025-06-20T18:25:44.498448840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 18:25:44.498504 containerd[1891]: time="2025-06-20T18:25:44.498495560Z" level=info msg="runtime interface starting up..." Jun 20 18:25:44.498535 containerd[1891]: time="2025-06-20T18:25:44.498526976Z" level=info msg="starting plugins..." Jun 20 18:25:44.498586 containerd[1891]: time="2025-06-20T18:25:44.498577720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 18:25:44.498807 systemd[1]: Started containerd.service - containerd container runtime. 
Jun 20 18:25:44.503799 containerd[1891]: time="2025-06-20T18:25:44.503773616Z" level=info msg="containerd successfully booted in 1.352838s" Jun 20 18:25:44.505321 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:25:44.514116 systemd[1]: Startup finished in 1.624s (kernel) + 16.832s (initrd) + 19.551s (userspace) = 38.008s. Jun 20 18:25:45.096265 login[2019]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:45.097007 login[2020]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:25:45.107789 systemd-logind[1872]: New session 2 of user core. Jun 20 18:25:45.108303 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:25:45.109149 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:25:45.113385 systemd-logind[1872]: New session 1 of user core. Jun 20 18:25:45.152548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:25:45.154213 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:25:45.163596 (systemd)[2071]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:25:45.165485 systemd-logind[1872]: New session c1 of user core. Jun 20 18:25:45.267358 systemd[2071]: Queued start job for default target default.target. Jun 20 18:25:45.271691 systemd[2071]: Created slice app.slice - User Application Slice. Jun 20 18:25:45.271800 systemd[2071]: Reached target paths.target - Paths. Jun 20 18:25:45.271926 systemd[2071]: Reached target timers.target - Timers. Jun 20 18:25:45.274173 systemd[2071]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:25:45.279387 systemd[2071]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:25:45.279427 systemd[2071]: Reached target sockets.target - Sockets. 
Jun 20 18:25:45.279455 systemd[2071]: Reached target basic.target - Basic System. Jun 20 18:25:45.279474 systemd[2071]: Reached target default.target - Main User Target. Jun 20 18:25:45.279494 systemd[2071]: Startup finished in 109ms. Jun 20 18:25:45.279696 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:25:45.280710 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:25:45.281233 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:25:45.629348 waagent[2002]: 2025-06-20T18:25:45.629284Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 18:25:45.633654 waagent[2002]: 2025-06-20T18:25:45.633617Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 18:25:45.636974 waagent[2002]: 2025-06-20T18:25:45.636947Z INFO Daemon Daemon Python: 3.11.12 Jun 20 18:25:45.640205 waagent[2002]: 2025-06-20T18:25:45.640160Z INFO Daemon Daemon Run daemon Jun 20 18:25:45.643201 waagent[2002]: 2025-06-20T18:25:45.643170Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 18:25:45.649478 waagent[2002]: 2025-06-20T18:25:45.649408Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:25:45.653122 waagent[2002]: 2025-06-20T18:25:45.653092Z INFO Daemon Daemon Activate resource disk Jun 20 18:25:45.656330 waagent[2002]: 2025-06-20T18:25:45.656305Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:25:45.664071 waagent[2002]: 2025-06-20T18:25:45.664034Z INFO Daemon Daemon Found device: None Jun 20 18:25:45.667292 waagent[2002]: 2025-06-20T18:25:45.667265Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:25:45.672880 waagent[2002]: 2025-06-20T18:25:45.672850Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:25:45.680790 waagent[2002]: 
2025-06-20T18:25:45.680754Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:25:45.684691 waagent[2002]: 2025-06-20T18:25:45.684664Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:25:45.693172 waagent[2002]: 2025-06-20T18:25:45.693137Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:25:45.702811 waagent[2002]: 2025-06-20T18:25:45.702776Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:25:45.709374 waagent[2002]: 2025-06-20T18:25:45.709346Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:25:45.712844 waagent[2002]: 2025-06-20T18:25:45.712822Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:25:45.880855 waagent[2002]: 2025-06-20T18:25:45.880130Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:25:45.890802 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:25:45.892725 waagent[2002]: 2025-06-20T18:25:45.892679Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:25:45.896106 waagent[2002]: 2025-06-20T18:25:45.896065Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:25:45.899936 waagent[2002]: 2025-06-20T18:25:45.899908Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 18:25:45.904408 waagent[2002]: 2025-06-20T18:25:45.904371Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:25:45.907993 waagent[2002]: 2025-06-20T18:25:45.907961Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:25:45.911433 waagent[2002]: 2025-06-20T18:25:45.911407Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:25:46.031831 waagent[2002]: 2025-06-20T18:25:46.031786Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:25:46.036605 waagent[2002]: 2025-06-20T18:25:46.036583Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:25:46.040184 waagent[2002]: 2025-06-20T18:25:46.040159Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:25:46.169493 waagent[2002]: 2025-06-20T18:25:46.169373Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:25:46.173907 waagent[2002]: 2025-06-20T18:25:46.173875Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:25:46.180947 waagent[2002]: 2025-06-20T18:25:46.180912Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:25:46.202546 waagent[2002]: 2025-06-20T18:25:46.202513Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:25:46.206527 waagent[2002]: 2025-06-20T18:25:46.206495Z INFO Daemon Jun 20 18:25:46.208521 waagent[2002]: 2025-06-20T18:25:46.208493Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 74ec0d4b-6a22-45b3-84ad-0f346b61d1ab eTag: 1983239741044447569 source: Fabric] Jun 20 18:25:46.216240 waagent[2002]: 2025-06-20T18:25:46.216209Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:25:46.220680 waagent[2002]: 2025-06-20T18:25:46.220649Z INFO Daemon Jun 20 18:25:46.222565 waagent[2002]: 2025-06-20T18:25:46.222540Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:25:46.230356 waagent[2002]: 2025-06-20T18:25:46.230328Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:25:46.288670 waagent[2002]: 2025-06-20T18:25:46.288621Z INFO Daemon Downloaded certificate {'thumbprint': 'B4F86089593BE910E351422C147C02E900750C17', 'hasPrivateKey': False} Jun 20 18:25:46.295793 waagent[2002]: 2025-06-20T18:25:46.295755Z INFO Daemon Downloaded certificate {'thumbprint': '399330E85FE1B8E8EF4F3352A29CAA52E2BBC819', 'hasPrivateKey': True} Jun 20 18:25:46.302510 waagent[2002]: 2025-06-20T18:25:46.302479Z INFO Daemon Fetch goal state completed Jun 20 18:25:46.310980 waagent[2002]: 2025-06-20T18:25:46.310955Z INFO Daemon Daemon Starting provisioning Jun 20 18:25:46.314686 waagent[2002]: 2025-06-20T18:25:46.314659Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:25:46.317997 waagent[2002]: 2025-06-20T18:25:46.317975Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-c937e4b650] Jun 20 18:25:46.360104 waagent[2002]: 2025-06-20T18:25:46.359688Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-c937e4b650] Jun 20 18:25:46.364127 waagent[2002]: 2025-06-20T18:25:46.364095Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:25:46.368425 waagent[2002]: 2025-06-20T18:25:46.368397Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:25:46.377684 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:25:46.377691 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 18:25:46.377731 systemd-networkd[1484]: eth0: DHCP lease lost Jun 20 18:25:46.378687 waagent[2002]: 2025-06-20T18:25:46.378632Z INFO Daemon Daemon Create user account if not exists Jun 20 18:25:46.382397 waagent[2002]: 2025-06-20T18:25:46.382366Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:25:46.386780 waagent[2002]: 2025-06-20T18:25:46.386751Z INFO Daemon Daemon Configure sudoer Jun 20 18:25:46.396706 waagent[2002]: 2025-06-20T18:25:46.393871Z INFO Daemon Daemon Configure sshd Jun 20 18:25:46.400347 waagent[2002]: 2025-06-20T18:25:46.400309Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:25:46.409656 waagent[2002]: 2025-06-20T18:25:46.409624Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:25:46.410108 systemd-networkd[1484]: eth0: DHCPv4 address 10.200.20.16/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:25:47.520858 waagent[2002]: 2025-06-20T18:25:47.520813Z INFO Daemon Daemon Provisioning complete Jun 20 18:25:47.533594 waagent[2002]: 2025-06-20T18:25:47.533564Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:25:47.538179 waagent[2002]: 2025-06-20T18:25:47.538151Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 20 18:25:47.545968 waagent[2002]: 2025-06-20T18:25:47.545944Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 18:25:47.640905 waagent[2126]: 2025-06-20T18:25:47.640841Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 18:25:47.641677 waagent[2126]: 2025-06-20T18:25:47.641202Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 18:25:47.641677 waagent[2126]: 2025-06-20T18:25:47.641252Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 18:25:47.641677 waagent[2126]: 2025-06-20T18:25:47.641284Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jun 20 18:25:48.354089 waagent[2126]: 2025-06-20T18:25:48.353967Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 18:25:48.354251 waagent[2126]: 2025-06-20T18:25:48.354221Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:25:48.354287 waagent[2126]: 2025-06-20T18:25:48.354276Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:25:48.360023 waagent[2126]: 2025-06-20T18:25:48.359979Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:25:48.364265 waagent[2126]: 2025-06-20T18:25:48.364235Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:25:48.364604 waagent[2126]: 2025-06-20T18:25:48.364572Z INFO ExtHandler Jun 20 18:25:48.364653 waagent[2126]: 2025-06-20T18:25:48.364636Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4093d65a-f9d6-40a5-b4b0-b2f8743faad7 eTag: 1983239741044447569 source: Fabric] Jun 20 18:25:48.364863 waagent[2126]: 2025-06-20T18:25:48.364838Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:25:48.365293 waagent[2126]: 2025-06-20T18:25:48.365263Z INFO ExtHandler Jun 20 18:25:48.365330 waagent[2126]: 2025-06-20T18:25:48.365315Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:25:48.368281 waagent[2126]: 2025-06-20T18:25:48.368258Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:25:48.446201 waagent[2126]: 2025-06-20T18:25:48.446154Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B4F86089593BE910E351422C147C02E900750C17', 'hasPrivateKey': False} Jun 20 18:25:48.446509 waagent[2126]: 2025-06-20T18:25:48.446476Z INFO ExtHandler Downloaded certificate {'thumbprint': '399330E85FE1B8E8EF4F3352A29CAA52E2BBC819', 'hasPrivateKey': True} Jun 20 18:25:48.446799 waagent[2126]: 2025-06-20T18:25:48.446769Z INFO ExtHandler Fetch goal state completed Jun 20 18:25:48.457427 waagent[2126]: 2025-06-20T18:25:48.457386Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 18:25:48.460555 waagent[2126]: 2025-06-20T18:25:48.460513Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2126 Jun 20 18:25:48.460657 waagent[2126]: 2025-06-20T18:25:48.460634Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:25:48.460886 waagent[2126]: 2025-06-20T18:25:48.460861Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 18:25:48.461936 waagent[2126]: 2025-06-20T18:25:48.461902Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:25:48.462325 waagent[2126]: 2025-06-20T18:25:48.462294Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 18:25:48.462435 waagent[2126]: 
2025-06-20T18:25:48.462414Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 18:25:48.462848 waagent[2126]: 2025-06-20T18:25:48.462819Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:25:48.545222 waagent[2126]: 2025-06-20T18:25:48.545193Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:25:48.545348 waagent[2126]: 2025-06-20T18:25:48.545324Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:25:48.549567 waagent[2126]: 2025-06-20T18:25:48.549533Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:25:48.553894 systemd[1]: Reload requested from client PID 2145 ('systemctl') (unit waagent.service)... Jun 20 18:25:48.553905 systemd[1]: Reloading... Jun 20 18:25:48.620220 zram_generator::config[2192]: No configuration found. Jun 20 18:25:48.678614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:25:48.758224 systemd[1]: Reloading finished in 204 ms. Jun 20 18:25:48.785082 waagent[2126]: 2025-06-20T18:25:48.784511Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:25:48.785082 waagent[2126]: 2025-06-20T18:25:48.784651Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:25:49.515828 waagent[2126]: 2025-06-20T18:25:49.515752Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 18:25:49.516119 waagent[2126]: 2025-06-20T18:25:49.516088Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 18:25:49.516765 waagent[2126]: 2025-06-20T18:25:49.516724Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:25:49.517057 waagent[2126]: 2025-06-20T18:25:49.517014Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:25:49.517313 waagent[2126]: 2025-06-20T18:25:49.517276Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:25:49.517474 waagent[2126]: 2025-06-20T18:25:49.517440Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:25:49.517584 waagent[2126]: 2025-06-20T18:25:49.517549Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:25:49.518091 waagent[2126]: 2025-06-20T18:25:49.517729Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:25:49.518091 waagent[2126]: 2025-06-20T18:25:49.517861Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:25:49.518091 waagent[2126]: 2025-06-20T18:25:49.517903Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:25:49.518091 waagent[2126]: 2025-06-20T18:25:49.517927Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:25:49.518315 waagent[2126]: 2025-06-20T18:25:49.518288Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:25:49.518536 waagent[2126]: 2025-06-20T18:25:49.518509Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:25:49.518755 waagent[2126]: 2025-06-20T18:25:49.518723Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jun 20 18:25:49.519190 waagent[2126]: 2025-06-20T18:25:49.519160Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:25:49.519412 waagent[2126]: 2025-06-20T18:25:49.519383Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:25:49.519412 waagent[2126]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:25:49.519412 waagent[2126]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:25:49.519412 waagent[2126]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:25:49.519412 waagent[2126]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:25:49.519412 waagent[2126]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:25:49.519412 waagent[2126]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:25:49.519612 waagent[2126]: 2025-06-20T18:25:49.519589Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:25:49.520157 waagent[2126]: 2025-06-20T18:25:49.520111Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 20 18:25:49.527092 waagent[2126]: 2025-06-20T18:25:49.526891Z INFO ExtHandler ExtHandler Jun 20 18:25:49.527092 waagent[2126]: 2025-06-20T18:25:49.526946Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bc39595d-f157-4061-bf57-0936f3d29cde correlation 3683bcbb-7b54-4e5b-a008-f8244ea420cf created: 2025-06-20T18:23:45.162857Z] Jun 20 18:25:49.527379 waagent[2126]: 2025-06-20T18:25:49.527348Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jun 20 18:25:49.527888 waagent[2126]: 2025-06-20T18:25:49.527862Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 18:25:49.564651 waagent[2126]: 2025-06-20T18:25:49.564600Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 18:25:49.564651 waagent[2126]: Try `iptables -h' or 'iptables --help' for more information.) Jun 20 18:25:49.564943 waagent[2126]: 2025-06-20T18:25:49.564912Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2343C409-9A77-4B67-9996-2CE9C88A7C0B;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 18:25:49.644041 waagent[2126]: 2025-06-20T18:25:49.643741Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:25:49.644041 waagent[2126]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:25:49.644041 waagent[2126]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:25:49.644041 waagent[2126]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:60:ef brd ff:ff:ff:ff:ff:ff Jun 20 18:25:49.644041 waagent[2126]: 3: enP34281s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:60:ef brd ff:ff:ff:ff:ff:ff\ altname enP34281p0s2 Jun 20 18:25:49.644041 waagent[2126]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:25:49.644041 waagent[2126]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:25:49.644041 waagent[2126]: 2: eth0 inet 10.200.20.16/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:25:49.644041 waagent[2126]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Jun 20 18:25:49.644041 waagent[2126]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:25:49.644041 waagent[2126]: 2: eth0 inet6 fe80::20d:3aff:fefc:60ef/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:25:49.644041 waagent[2126]: 3: enP34281s1 inet6 fe80::20d:3aff:fefc:60ef/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:25:49.712642 waagent[2126]: 2025-06-20T18:25:49.712596Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 18:25:49.712642 waagent[2126]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.712642 waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.712642 waagent[2126]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.712642 waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.712642 waagent[2126]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.712642 waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.712642 waagent[2126]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:25:49.712642 waagent[2126]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:25:49.712642 waagent[2126]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:25:49.714856 waagent[2126]: 2025-06-20T18:25:49.714814Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:25:49.714856 waagent[2126]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.714856 waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.714856 waagent[2126]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.714856 waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.714856 waagent[2126]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:25:49.714856 
waagent[2126]: pkts bytes target prot opt in out source destination Jun 20 18:25:49.714856 waagent[2126]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:25:49.714856 waagent[2126]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:25:49.714856 waagent[2126]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:25:49.715033 waagent[2126]: 2025-06-20T18:25:49.715009Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:25:53.049249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:25:53.050646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:25:53.146383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:25:53.153468 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:25:53.250731 kubelet[2278]: E0620 18:25:53.250680 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:25:53.253412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:25:53.253525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:25:53.253808 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.5M memory peak. Jun 20 18:26:01.992045 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:26:01.992992 systemd[1]: Started sshd@0-10.200.20.16:22-10.200.16.10:46844.service - OpenSSH per-connection server daemon (10.200.16.10:46844). 
Jun 20 18:26:02.859245 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 46844 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:02.860274 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:02.864129 systemd-logind[1872]: New session 3 of user core. Jun 20 18:26:02.871172 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:26:03.272874 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:26:03.274023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:03.276259 systemd[1]: Started sshd@1-10.200.20.16:22-10.200.16.10:46854.service - OpenSSH per-connection server daemon (10.200.16.10:46854). Jun 20 18:26:03.391881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:26:03.394095 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:26:03.510280 kubelet[2301]: E0620 18:26:03.510242 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:26:03.512184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:26:03.512370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:26:03.512823 systemd[1]: kubelet.service: Consumed 100ms CPU time, 105.3M memory peak. 
Jun 20 18:26:03.728374 sshd[2292]: Accepted publickey for core from 10.200.16.10 port 46854 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:03.729487 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:03.732902 systemd-logind[1872]: New session 4 of user core. Jun 20 18:26:03.740189 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:26:04.061832 sshd[2308]: Connection closed by 10.200.16.10 port 46854 Jun 20 18:26:04.061757 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:04.064465 systemd[1]: sshd@1-10.200.20.16:22-10.200.16.10:46854.service: Deactivated successfully. Jun 20 18:26:04.065766 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:26:04.066328 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:26:04.067599 systemd-logind[1872]: Removed session 4. Jun 20 18:26:04.149252 systemd[1]: Started sshd@2-10.200.20.16:22-10.200.16.10:46866.service - OpenSSH per-connection server daemon (10.200.16.10:46866). Jun 20 18:26:04.619484 sshd[2314]: Accepted publickey for core from 10.200.16.10 port 46866 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:04.620514 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:04.624020 systemd-logind[1872]: New session 5 of user core. Jun 20 18:26:04.635188 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:26:04.967359 sshd[2316]: Connection closed by 10.200.16.10 port 46866 Jun 20 18:26:04.967831 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:04.970762 systemd[1]: sshd@2-10.200.20.16:22-10.200.16.10:46866.service: Deactivated successfully. Jun 20 18:26:04.972131 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:26:04.972695 systemd-logind[1872]: Session 5 logged out. 
Waiting for processes to exit. Jun 20 18:26:04.973644 systemd-logind[1872]: Removed session 5. Jun 20 18:26:05.057374 systemd[1]: Started sshd@3-10.200.20.16:22-10.200.16.10:46874.service - OpenSSH per-connection server daemon (10.200.16.10:46874). Jun 20 18:26:05.543472 sshd[2322]: Accepted publickey for core from 10.200.16.10 port 46874 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:05.544490 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:05.547984 systemd-logind[1872]: New session 6 of user core. Jun 20 18:26:05.555191 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:26:05.796957 chronyd[1875]: Selected source PHC0 Jun 20 18:26:05.890967 sshd[2324]: Connection closed by 10.200.16.10 port 46874 Jun 20 18:26:05.891372 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:05.894098 systemd[1]: sshd@3-10.200.20.16:22-10.200.16.10:46874.service: Deactivated successfully. Jun 20 18:26:05.895360 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:26:05.895869 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:26:05.896961 systemd-logind[1872]: Removed session 6. Jun 20 18:26:05.971384 systemd[1]: Started sshd@4-10.200.20.16:22-10.200.16.10:46876.service - OpenSSH per-connection server daemon (10.200.16.10:46876). Jun 20 18:26:06.423648 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 46876 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:06.424714 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:06.428244 systemd-logind[1872]: New session 7 of user core. Jun 20 18:26:06.440187 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 18:26:06.880030 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 18:26:06.880262 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:06.924719 sudo[2333]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:06.995692 sshd[2332]: Connection closed by 10.200.16.10 port 46876
Jun 20 18:26:06.996321 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:06.999428 systemd[1]: sshd@4-10.200.20.16:22-10.200.16.10:46876.service: Deactivated successfully.
Jun 20 18:26:07.000895 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 18:26:07.002015 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit.
Jun 20 18:26:07.002945 systemd-logind[1872]: Removed session 7.
Jun 20 18:26:07.076377 systemd[1]: Started sshd@5-10.200.20.16:22-10.200.16.10:46886.service - OpenSSH per-connection server daemon (10.200.16.10:46886).
Jun 20 18:26:07.528909 sshd[2339]: Accepted publickey for core from 10.200.16.10 port 46886 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:07.530001 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:07.533509 systemd-logind[1872]: New session 8 of user core.
Jun 20 18:26:07.544181 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 18:26:07.783169 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 18:26:07.783372 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:07.790667 sudo[2343]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:07.793890 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 18:26:07.794088 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:07.799953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:26:07.828607 augenrules[2365]: No rules
Jun 20 18:26:07.829745 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:26:07.830024 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:26:07.830963 sudo[2342]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:07.907727 sshd[2341]: Connection closed by 10.200.16.10 port 46886
Jun 20 18:26:07.908442 sshd-session[2339]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:07.911914 systemd[1]: sshd@5-10.200.20.16:22-10.200.16.10:46886.service: Deactivated successfully.
Jun 20 18:26:07.913322 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 18:26:07.913901 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit.
Jun 20 18:26:07.914956 systemd-logind[1872]: Removed session 8.
Jun 20 18:26:07.993404 systemd[1]: Started sshd@6-10.200.20.16:22-10.200.16.10:46900.service - OpenSSH per-connection server daemon (10.200.16.10:46900).
Jun 20 18:26:08.453751 sshd[2374]: Accepted publickey for core from 10.200.16.10 port 46900 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:08.454748 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:08.458540 systemd-logind[1872]: New session 9 of user core.
Jun 20 18:26:08.468193 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 18:26:08.707852 sudo[2377]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 18:26:08.708057 sudo[2377]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:10.797868 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 18:26:10.808302 (dockerd)[2395]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 18:26:12.354101 dockerd[2395]: time="2025-06-20T18:26:12.353423790Z" level=info msg="Starting up"
Jun 20 18:26:12.354912 dockerd[2395]: time="2025-06-20T18:26:12.354890662Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 18:26:12.422354 systemd[1]: var-lib-docker-metacopy\x2dcheck3159667856-merged.mount: Deactivated successfully.
Jun 20 18:26:12.438636 dockerd[2395]: time="2025-06-20T18:26:12.438586326Z" level=info msg="Loading containers: start."
Jun 20 18:26:12.452108 kernel: Initializing XFRM netlink socket
Jun 20 18:26:12.932506 systemd-networkd[1484]: docker0: Link UP
Jun 20 18:26:12.950043 dockerd[2395]: time="2025-06-20T18:26:12.949956982Z" level=info msg="Loading containers: done."
Jun 20 18:26:12.975631 dockerd[2395]: time="2025-06-20T18:26:12.975589998Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 18:26:12.975750 dockerd[2395]: time="2025-06-20T18:26:12.975664262Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 18:26:12.975770 dockerd[2395]: time="2025-06-20T18:26:12.975764766Z" level=info msg="Initializing buildkit"
Jun 20 18:26:13.055143 dockerd[2395]: time="2025-06-20T18:26:13.055104622Z" level=info msg="Completed buildkit initialization"
Jun 20 18:26:13.059982 dockerd[2395]: time="2025-06-20T18:26:13.059944558Z" level=info msg="Daemon has completed initialization"
Jun 20 18:26:13.060131 dockerd[2395]: time="2025-06-20T18:26:13.059993862Z" level=info msg="API listen on /run/docker.sock"
Jun 20 18:26:13.060500 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 18:26:13.548965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 18:26:13.550549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:13.661864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:13.664225 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:26:13.750228 kubelet[2602]: E0620 18:26:13.750187 2602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:26:13.752025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:26:13.752234 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:26:13.752671 systemd[1]: kubelet.service: Consumed 100ms CPU time, 105.5M memory peak.
Jun 20 18:26:13.833201 containerd[1891]: time="2025-06-20T18:26:13.832913120Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 20 18:26:15.003873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096934493.mount: Deactivated successfully.
Jun 20 18:26:16.069190 containerd[1891]: time="2025-06-20T18:26:16.069139459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:16.073097 containerd[1891]: time="2025-06-20T18:26:16.072938587Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793"
Jun 20 18:26:16.084321 containerd[1891]: time="2025-06-20T18:26:16.084276969Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:16.088170 containerd[1891]: time="2025-06-20T18:26:16.088111529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:16.088816 containerd[1891]: time="2025-06-20T18:26:16.088629490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.255680919s"
Jun 20 18:26:16.088816 containerd[1891]: time="2025-06-20T18:26:16.088660075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jun 20 18:26:16.089710 containerd[1891]: time="2025-06-20T18:26:16.089692228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 20 18:26:17.461025 containerd[1891]: time="2025-06-20T18:26:17.460970417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:17.465957 containerd[1891]: time="2025-06-20T18:26:17.465934396Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677"
Jun 20 18:26:17.473343 containerd[1891]: time="2025-06-20T18:26:17.473321368Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:17.480251 containerd[1891]: time="2025-06-20T18:26:17.480204644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:17.480878 containerd[1891]: time="2025-06-20T18:26:17.480628483Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.390843918s"
Jun 20 18:26:17.480878 containerd[1891]: time="2025-06-20T18:26:17.480654723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jun 20 18:26:17.481134 containerd[1891]: time="2025-06-20T18:26:17.481115867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 20 18:26:18.644240 containerd[1891]: time="2025-06-20T18:26:18.644175222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:18.646878 containerd[1891]: time="2025-06-20T18:26:18.646848058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066"
Jun 20 18:26:18.651395 containerd[1891]: time="2025-06-20T18:26:18.651354454Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:18.656329 containerd[1891]: time="2025-06-20T18:26:18.656292273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:18.656909 containerd[1891]: time="2025-06-20T18:26:18.656742993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.175549077s"
Jun 20 18:26:18.656909 containerd[1891]: time="2025-06-20T18:26:18.656771689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jun 20 18:26:18.657240 containerd[1891]: time="2025-06-20T18:26:18.657209809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jun 20 18:26:20.156396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489848564.mount: Deactivated successfully.
Jun 20 18:26:20.399341 containerd[1891]: time="2025-06-20T18:26:20.399286978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:20.403520 containerd[1891]: time="2025-06-20T18:26:20.403484361Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957"
Jun 20 18:26:20.406617 containerd[1891]: time="2025-06-20T18:26:20.406576957Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:20.411352 containerd[1891]: time="2025-06-20T18:26:20.411237587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:20.411808 containerd[1891]: time="2025-06-20T18:26:20.411693107Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.754457362s"
Jun 20 18:26:20.411808 containerd[1891]: time="2025-06-20T18:26:20.411720859Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jun 20 18:26:20.412160 containerd[1891]: time="2025-06-20T18:26:20.412135410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jun 20 18:26:20.821316 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jun 20 18:26:21.236593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234784594.mount: Deactivated successfully.
Jun 20 18:26:22.224099 containerd[1891]: time="2025-06-20T18:26:22.223820331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:22.227397 containerd[1891]: time="2025-06-20T18:26:22.227367632Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jun 20 18:26:22.231479 containerd[1891]: time="2025-06-20T18:26:22.231439087Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:22.236186 containerd[1891]: time="2025-06-20T18:26:22.236148186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:22.236870 containerd[1891]: time="2025-06-20T18:26:22.236763702Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.824601195s"
Jun 20 18:26:22.236870 containerd[1891]: time="2025-06-20T18:26:22.236793423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jun 20 18:26:22.237403 containerd[1891]: time="2025-06-20T18:26:22.237344482Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 18:26:22.895593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863079027.mount: Deactivated successfully.
Jun 20 18:26:22.929107 containerd[1891]: time="2025-06-20T18:26:22.928740632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:22.932635 containerd[1891]: time="2025-06-20T18:26:22.932612759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jun 20 18:26:22.937139 containerd[1891]: time="2025-06-20T18:26:22.937117620Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:22.942107 containerd[1891]: time="2025-06-20T18:26:22.942082880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:22.942499 containerd[1891]: time="2025-06-20T18:26:22.942372522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 704.890219ms"
Jun 20 18:26:22.942499 containerd[1891]: time="2025-06-20T18:26:22.942402763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 20 18:26:22.942932 containerd[1891]: time="2025-06-20T18:26:22.942912179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jun 20 18:26:23.660269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145975663.mount: Deactivated successfully.
Jun 20 18:26:23.798895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 18:26:23.801221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:23.886756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:23.889185 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:26:24.007108 kubelet[2751]: E0620 18:26:24.006263 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:26:24.008186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:26:24.008298 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:26:24.008577 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.6M memory peak.
Jun 20 18:26:26.953506 containerd[1891]: time="2025-06-20T18:26:26.953450917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:26.957651 containerd[1891]: time="2025-06-20T18:26:26.957458153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
Jun 20 18:26:26.961931 containerd[1891]: time="2025-06-20T18:26:26.961909647Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:26.967261 containerd[1891]: time="2025-06-20T18:26:26.967231702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:26.967920 containerd[1891]: time="2025-06-20T18:26:26.967891938Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.024957374s"
Jun 20 18:26:26.968000 containerd[1891]: time="2025-06-20T18:26:26.967987204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jun 20 18:26:27.521208 update_engine[1877]: I20250620 18:26:27.521150 1877 update_attempter.cc:509] Updating boot flags...
Jun 20 18:26:29.225308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:29.225632 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.6M memory peak.
Jun 20 18:26:29.227531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:29.245911 systemd[1]: Reload requested from client PID 2894 ('systemctl') (unit session-9.scope)...
Jun 20 18:26:29.245926 systemd[1]: Reloading...
Jun 20 18:26:29.335181 zram_generator::config[2940]: No configuration found.
Jun 20 18:26:29.402326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:26:29.485049 systemd[1]: Reloading finished in 238 ms.
Jun 20 18:26:29.537781 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 18:26:29.537845 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 18:26:29.538104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:29.538159 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95M memory peak.
Jun 20 18:26:29.539424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:29.809910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:29.819281 (kubelet)[3008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 18:26:29.842084 kubelet[3008]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:26:29.842084 kubelet[3008]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 20 18:26:29.842084 kubelet[3008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:26:29.842084 kubelet[3008]: I0620 18:26:29.841934 3008 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 18:26:30.063132 kubelet[3008]: I0620 18:26:30.061968 3008 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jun 20 18:26:30.063132 kubelet[3008]: I0620 18:26:30.062337 3008 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 18:26:30.063132 kubelet[3008]: I0620 18:26:30.062542 3008 server.go:934] "Client rotation is on, will bootstrap in background"
Jun 20 18:26:30.075665 kubelet[3008]: E0620 18:26:30.075622 3008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError"
Jun 20 18:26:30.076299 kubelet[3008]: I0620 18:26:30.076193 3008 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 18:26:30.081019 kubelet[3008]: I0620 18:26:30.080999 3008 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 18:26:30.084589 kubelet[3008]: I0620 18:26:30.084440 3008 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 18:26:30.084879 kubelet[3008]: I0620 18:26:30.084865 3008 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jun 20 18:26:30.085046 kubelet[3008]: I0620 18:26:30.085026 3008 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 18:26:30.085281 kubelet[3008]: I0620 18:26:30.085111 3008 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-c937e4b650","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 18:26:30.085413 kubelet[3008]: I0620 18:26:30.085402 3008 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 18:26:30.085461 kubelet[3008]: I0620 18:26:30.085454 3008 container_manager_linux.go:300] "Creating device plugin manager"
Jun 20 18:26:30.085605 kubelet[3008]: I0620 18:26:30.085594 3008 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:26:30.087011 kubelet[3008]: I0620 18:26:30.086991 3008 kubelet.go:408] "Attempting to sync node with API server"
Jun 20 18:26:30.087083 kubelet[3008]: I0620 18:26:30.087019 3008 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 18:26:30.087083 kubelet[3008]: I0620 18:26:30.087036 3008 kubelet.go:314] "Adding apiserver pod source"
Jun 20 18:26:30.087083 kubelet[3008]: I0620 18:26:30.087048 3008 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 18:26:30.089659 kubelet[3008]: W0620 18:26:30.089622 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-c937e4b650&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused
Jun 20 18:26:30.089760 kubelet[3008]: E0620 18:26:30.089744 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-c937e4b650&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError"
Jun 20 18:26:30.090119 kubelet[3008]: W0620 18:26:30.090095 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused
Jun 20 18:26:30.090212 kubelet[3008]: E0620 18:26:30.090201 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError"
Jun 20 18:26:30.090336 kubelet[3008]: I0620 18:26:30.090324 3008 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 18:26:30.090685 kubelet[3008]: I0620 18:26:30.090670 3008 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 20 18:26:30.090795 kubelet[3008]: W0620 18:26:30.090785 3008 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 18:26:30.091459 kubelet[3008]: I0620 18:26:30.091442 3008 server.go:1274] "Started kubelet"
Jun 20 18:26:30.093275 kubelet[3008]: I0620 18:26:30.092792 3008 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 18:26:30.093275 kubelet[3008]: I0620 18:26:30.092989 3008 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 18:26:30.093275 kubelet[3008]: I0620 18:26:30.093250 3008 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 18:26:30.093417 kubelet[3008]: I0620 18:26:30.093395 3008 server.go:449] "Adding debug handlers to kubelet server"
Jun 20 18:26:30.094108 kubelet[3008]: E0620 18:26:30.093528 3008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.16:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-c937e4b650.184ad388e49de577 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-c937e4b650,UID:ci-4344.1.0-a-c937e4b650,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-c937e4b650,},FirstTimestamp:2025-06-20 18:26:30.091425143 +0000 UTC m=+0.270013582,LastTimestamp:2025-06-20 18:26:30.091425143 +0000 UTC m=+0.270013582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-c937e4b650,}"
Jun 20 18:26:30.095669 kubelet[3008]: I0620 18:26:30.095648 3008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 18:26:30.096300 kubelet[3008]: I0620 18:26:30.096284 3008 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 18:26:30.097388 kubelet[3008]: E0620 18:26:30.097372 3008 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 18:26:30.097548 kubelet[3008]: E0620 18:26:30.097536 3008 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-c937e4b650\" not found"
Jun 20 18:26:30.097632 kubelet[3008]: I0620 18:26:30.097624 3008 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jun 20 18:26:30.097813 kubelet[3008]: I0620 18:26:30.097798 3008 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jun 20 18:26:30.097909 kubelet[3008]: I0620 18:26:30.097901 3008 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 18:26:30.098963 kubelet[3008]: W0620 18:26:30.098183 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused
Jun 20 18:26:30.099063 kubelet[3008]: E0620 18:26:30.099049 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError"
Jun 20 18:26:30.099291 kubelet[3008]: I0620 18:26:30.099278 3008 factory.go:221] Registration of the systemd container factory successfully
Jun 20 18:26:30.099416 kubelet[3008]: I0620 18:26:30.099402 3008 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 18:26:30.099999 kubelet[3008]: E0620 18:26:30.099912 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-c937e4b650?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="200ms"
Jun 20 18:26:30.100847 kubelet[3008]: I0620 18:26:30.100828 3008 factory.go:221] Registration of the containerd container factory successfully
Jun 20 18:26:30.110209 kubelet[3008]: I0620 18:26:30.110111 3008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 20 18:26:30.110838 kubelet[3008]: I0620 18:26:30.110826 3008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 20 18:26:30.110911 kubelet[3008]: I0620 18:26:30.110902 3008 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 20 18:26:30.110963 kubelet[3008]: I0620 18:26:30.110956 3008 kubelet.go:2321] "Starting kubelet main sync loop"
Jun 20 18:26:30.111036 kubelet[3008]: E0620 18:26:30.111024 3008 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 18:26:30.115669 kubelet[3008]: W0620 18:26:30.115498 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused
Jun 20 18:26:30.115904 kubelet[3008]: E0620 18:26:30.115740 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError"
Jun 20 18:26:30.120743 kubelet[3008]: I0620 18:26:30.120719 3008 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 20 18:26:30.120743 kubelet[3008]: I0620 18:26:30.120729 3008
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:26:30.120743 kubelet[3008]: I0620 18:26:30.120744 3008 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:26:30.197849 kubelet[3008]: E0620 18:26:30.197805 3008 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:30.211491 kubelet[3008]: E0620 18:26:30.211469 3008 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:26:30.298683 kubelet[3008]: E0620 18:26:30.298657 3008 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:30.301169 kubelet[3008]: E0620 18:26:30.301132 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-c937e4b650?timeout=10s\": dial tcp 10.200.20.16:6443: connect: connection refused" interval="400ms" Jun 20 18:26:30.393311 kubelet[3008]: I0620 18:26:30.392457 3008 policy_none.go:49] "None policy: Start" Jun 20 18:26:30.393311 kubelet[3008]: I0620 18:26:30.393134 3008 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:26:30.393311 kubelet[3008]: I0620 18:26:30.393196 3008 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:26:30.398713 kubelet[3008]: E0620 18:26:30.398692 3008 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:30.401562 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 20 18:26:30.412197 kubelet[3008]: E0620 18:26:30.412171 3008 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:26:30.421620 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:26:30.424282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:26:30.436053 kubelet[3008]: I0620 18:26:30.435572 3008 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:26:30.436053 kubelet[3008]: I0620 18:26:30.435721 3008 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:26:30.436053 kubelet[3008]: I0620 18:26:30.435731 3008 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:26:30.436053 kubelet[3008]: I0620 18:26:30.435975 3008 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:26:30.437722 kubelet[3008]: E0620 18:26:30.437704 3008 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:30.537967 kubelet[3008]: I0620 18:26:30.537935 3008 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.538358 kubelet[3008]: E0620 18:26:30.538336 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.701659 kubelet[3008]: E0620 18:26:30.701525 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-c937e4b650?timeout=10s\": dial tcp 10.200.20.16:6443: connect: 
connection refused" interval="800ms" Jun 20 18:26:30.740047 kubelet[3008]: I0620 18:26:30.740016 3008 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.740369 kubelet[3008]: E0620 18:26:30.740341 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.821148 systemd[1]: Created slice kubepods-burstable-pod69a0d143c037b5ab1401bd999541aec9.slice - libcontainer container kubepods-burstable-pod69a0d143c037b5ab1401bd999541aec9.slice. Jun 20 18:26:30.845303 systemd[1]: Created slice kubepods-burstable-pod5816ec6b8e4027059e1644f311deee33.slice - libcontainer container kubepods-burstable-pod5816ec6b8e4027059e1644f311deee33.slice. Jun 20 18:26:30.866163 systemd[1]: Created slice kubepods-burstable-pod37e5aca3bb57cde1b0fe9484f12cfb8e.slice - libcontainer container kubepods-burstable-pod37e5aca3bb57cde1b0fe9484f12cfb8e.slice. 
Jun 20 18:26:30.901044 kubelet[3008]: I0620 18:26:30.900867 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37e5aca3bb57cde1b0fe9484f12cfb8e-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-c937e4b650\" (UID: \"37e5aca3bb57cde1b0fe9484f12cfb8e\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901044 kubelet[3008]: I0620 18:26:30.900896 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901044 kubelet[3008]: I0620 18:26:30.900910 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901044 kubelet[3008]: I0620 18:26:30.900923 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901044 kubelet[3008]: I0620 18:26:30.900935 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901366 kubelet[3008]: I0620 18:26:30.900945 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901366 kubelet[3008]: I0620 18:26:30.900958 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901366 kubelet[3008]: I0620 18:26:30.900967 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:30.901366 kubelet[3008]: I0620 18:26:30.900977 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:31.053819 kubelet[3008]: W0620 18:26:31.053775 3008 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Jun 20 18:26:31.053947 kubelet[3008]: E0620 18:26:31.053832 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:26:31.142839 kubelet[3008]: I0620 18:26:31.142812 3008 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:31.143115 kubelet[3008]: E0620 18:26:31.143092 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.16:6443/api/v1/nodes\": dial tcp 10.200.20.16:6443: connect: connection refused" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:31.143285 containerd[1891]: time="2025-06-20T18:26:31.143251375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-c937e4b650,Uid:69a0d143c037b5ab1401bd999541aec9,Namespace:kube-system,Attempt:0,}" Jun 20 18:26:31.165229 containerd[1891]: time="2025-06-20T18:26:31.165029357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-c937e4b650,Uid:5816ec6b8e4027059e1644f311deee33,Namespace:kube-system,Attempt:0,}" Jun 20 18:26:31.168832 containerd[1891]: time="2025-06-20T18:26:31.168810162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-c937e4b650,Uid:37e5aca3bb57cde1b0fe9484f12cfb8e,Namespace:kube-system,Attempt:0,}" Jun 20 18:26:31.207800 kubelet[3008]: W0620 18:26:31.207753 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: 
Get "https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Jun 20 18:26:31.207879 kubelet[3008]: E0620 18:26:31.207807 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:26:31.216395 kubelet[3008]: W0620 18:26:31.216359 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.16:6443: connect: connection refused Jun 20 18:26:31.216450 kubelet[3008]: E0620 18:26:31.216398 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.16:6443: connect: connection refused" logger="UnhandledError" Jun 20 18:26:31.250083 containerd[1891]: time="2025-06-20T18:26:31.250036039Z" level=info msg="connecting to shim 72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15" address="unix:///run/containerd/s/3f2b2dd52511fd4b7fca2984d05073f026d9e15429385c481139e7ea9ea04492" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:31.273685 containerd[1891]: time="2025-06-20T18:26:31.273627746Z" level=info msg="connecting to shim dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b" address="unix:///run/containerd/s/d2039bdd5c4d717fb15c302161f72cf57e72c7369df3916ce2c00f41d38a1730" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:31.274319 systemd[1]: Started 
cri-containerd-72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15.scope - libcontainer container 72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15. Jun 20 18:26:31.282127 containerd[1891]: time="2025-06-20T18:26:31.282100157Z" level=info msg="connecting to shim 238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c" address="unix:///run/containerd/s/e00c5ecfc553d26b858b1a370f9d29ec938dcbaf7c364adf637cead9596210be" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:31.300312 systemd[1]: Started cri-containerd-dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b.scope - libcontainer container dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b. Jun 20 18:26:31.304290 systemd[1]: Started cri-containerd-238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c.scope - libcontainer container 238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c. Jun 20 18:26:31.323830 containerd[1891]: time="2025-06-20T18:26:31.323799749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-c937e4b650,Uid:69a0d143c037b5ab1401bd999541aec9,Namespace:kube-system,Attempt:0,} returns sandbox id \"72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15\"" Jun 20 18:26:31.327891 containerd[1891]: time="2025-06-20T18:26:31.327813358Z" level=info msg="CreateContainer within sandbox \"72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:26:31.355039 containerd[1891]: time="2025-06-20T18:26:31.355011874Z" level=info msg="Container 10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:31.360737 containerd[1891]: time="2025-06-20T18:26:31.360705621Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-c937e4b650,Uid:5816ec6b8e4027059e1644f311deee33,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b\"" Jun 20 18:26:31.362145 containerd[1891]: time="2025-06-20T18:26:31.362104449Z" level=info msg="CreateContainer within sandbox \"dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:26:31.364832 containerd[1891]: time="2025-06-20T18:26:31.364805464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-c937e4b650,Uid:37e5aca3bb57cde1b0fe9484f12cfb8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c\"" Jun 20 18:26:31.366194 containerd[1891]: time="2025-06-20T18:26:31.366142155Z" level=info msg="CreateContainer within sandbox \"238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:26:31.383801 containerd[1891]: time="2025-06-20T18:26:31.383769918Z" level=info msg="CreateContainer within sandbox \"72e0f7f639c5118113230318901e3484ed58f9af961aa2b2052f94824831fa15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6\"" Jun 20 18:26:31.384231 containerd[1891]: time="2025-06-20T18:26:31.384210879Z" level=info msg="StartContainer for \"10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6\"" Jun 20 18:26:31.384907 containerd[1891]: time="2025-06-20T18:26:31.384880548Z" level=info msg="connecting to shim 10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6" address="unix:///run/containerd/s/3f2b2dd52511fd4b7fca2984d05073f026d9e15429385c481139e7ea9ea04492" protocol=ttrpc version=3 Jun 20 18:26:31.399193 systemd[1]: Started 
cri-containerd-10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6.scope - libcontainer container 10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6. Jun 20 18:26:31.449971 containerd[1891]: time="2025-06-20T18:26:31.449874218Z" level=info msg="StartContainer for \"10b2b4f4b699059fefabb3ace1f2019d5ca4d41cd4e93269b6655e085749a7d6\" returns successfully" Jun 20 18:26:31.451330 containerd[1891]: time="2025-06-20T18:26:31.451245942Z" level=info msg="Container e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:31.460161 containerd[1891]: time="2025-06-20T18:26:31.460056008Z" level=info msg="Container 916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:31.945055 kubelet[3008]: I0620 18:26:31.945015 3008 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:32.044542 containerd[1891]: time="2025-06-20T18:26:32.044453298Z" level=info msg="CreateContainer within sandbox \"dfa49e523dfbba3f3d35d4361c56d121128d48578d11b745bee058fae71af59b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5\"" Jun 20 18:26:32.047185 containerd[1891]: time="2025-06-20T18:26:32.047138728Z" level=info msg="StartContainer for \"e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5\"" Jun 20 18:26:32.048145 containerd[1891]: time="2025-06-20T18:26:32.048105300Z" level=info msg="connecting to shim e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5" address="unix:///run/containerd/s/d2039bdd5c4d717fb15c302161f72cf57e72c7369df3916ce2c00f41d38a1730" protocol=ttrpc version=3 Jun 20 18:26:32.063196 systemd[1]: Started cri-containerd-e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5.scope - libcontainer container 
e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5. Jun 20 18:26:32.413004 kubelet[3008]: E0620 18:26:32.412963 3008 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.0-a-c937e4b650\" not found" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:32.564239 kubelet[3008]: I0620 18:26:32.564202 3008 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:32.564239 kubelet[3008]: E0620 18:26:32.564240 3008 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-c937e4b650\": node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:32.792376 containerd[1891]: time="2025-06-20T18:26:32.789298190Z" level=info msg="CreateContainer within sandbox \"238f566f281d7b0f2d307014e1c3d4777381f28f7bd603ce95927e55d84fdb4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf\"" Jun 20 18:26:32.792376 containerd[1891]: time="2025-06-20T18:26:32.791010881Z" level=info msg="StartContainer for \"e05351c1c59c06c3c6ade54155997f7aeb20bbebac4c239acd2351413a2fdbe5\" returns successfully" Jun 20 18:26:32.792376 containerd[1891]: time="2025-06-20T18:26:32.791689870Z" level=info msg="StartContainer for \"916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf\"" Jun 20 18:26:32.793731 containerd[1891]: time="2025-06-20T18:26:32.793686991Z" level=info msg="connecting to shim 916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf" address="unix:///run/containerd/s/e00c5ecfc553d26b858b1a370f9d29ec938dcbaf7c364adf637cead9596210be" protocol=ttrpc version=3 Jun 20 18:26:32.813836 kubelet[3008]: E0620 18:26:32.813810 3008 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:32.814085 kubelet[3008]: E0620 18:26:32.814052 3008 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:32.816198 systemd[1]: Started cri-containerd-916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf.scope - libcontainer container 916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf. Jun 20 18:26:32.853575 containerd[1891]: time="2025-06-20T18:26:32.853540989Z" level=info msg="StartContainer for \"916afe7cec1a931afbbdf03cdec40315a0e4f0afbf2112637e957b7970b694bf\" returns successfully" Jun 20 18:26:33.091677 kubelet[3008]: I0620 18:26:33.091644 3008 apiserver.go:52] "Watching apiserver" Jun 20 18:26:33.098215 kubelet[3008]: I0620 18:26:33.098188 3008 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:26:33.830285 kubelet[3008]: W0620 18:26:33.830239 3008 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:33.831168 kubelet[3008]: W0620 18:26:33.831093 3008 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:34.232403 kubelet[3008]: W0620 18:26:34.232177 3008 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:34.867367 systemd[1]: Reload requested from client PID 3276 ('systemctl') (unit session-9.scope)... Jun 20 18:26:34.867616 systemd[1]: Reloading... Jun 20 18:26:34.947169 zram_generator::config[3322]: No configuration found. 
Jun 20 18:26:35.022550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:26:35.114708 systemd[1]: Reloading finished in 246 ms. Jun 20 18:26:35.141585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:35.154811 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:26:35.154994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:26:35.155047 systemd[1]: kubelet.service: Consumed 516ms CPU time, 125M memory peak. Jun 20 18:26:35.156512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:35.254019 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:26:35.257674 (kubelet)[3386]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:26:35.285459 kubelet[3386]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:26:35.285459 kubelet[3386]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 18:26:35.285459 kubelet[3386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 18:26:35.285459 kubelet[3386]: I0620 18:26:35.285344 3386 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:26:35.290101 kubelet[3386]: I0620 18:26:35.289298 3386 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 18:26:35.290101 kubelet[3386]: I0620 18:26:35.289321 3386 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:26:35.290101 kubelet[3386]: I0620 18:26:35.289469 3386 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 18:26:35.290831 kubelet[3386]: I0620 18:26:35.290426 3386 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 18:26:35.296977 kubelet[3386]: I0620 18:26:35.296952 3386 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:26:35.300829 kubelet[3386]: I0620 18:26:35.300807 3386 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 18:26:35.303124 kubelet[3386]: I0620 18:26:35.303104 3386 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:26:35.303347 kubelet[3386]: I0620 18:26:35.303207 3386 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 18:26:35.303347 kubelet[3386]: I0620 18:26:35.303287 3386 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:26:35.303423 kubelet[3386]: I0620 18:26:35.303303 3386 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-c937e4b650","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:26:35.303506 kubelet[3386]: I0620 18:26:35.303424 3386 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:26:35.303506 kubelet[3386]: I0620 18:26:35.303431 3386 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 18:26:35.303506 kubelet[3386]: I0620 18:26:35.303457 3386 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:26:35.303552 kubelet[3386]: I0620 18:26:35.303545 3386 kubelet.go:408] "Attempting to sync node with API server" Jun 20 18:26:35.303568 kubelet[3386]: I0620 18:26:35.303554 3386 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:26:35.303568 kubelet[3386]: I0620 18:26:35.303566 3386 kubelet.go:314] "Adding apiserver pod source" Jun 20 18:26:35.303600 kubelet[3386]: I0620 18:26:35.303575 3386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:26:35.306064 kubelet[3386]: I0620 18:26:35.306047 3386 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 18:26:35.306353 kubelet[3386]: I0620 18:26:35.306332 3386 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 18:26:35.306605 kubelet[3386]: I0620 18:26:35.306588 3386 server.go:1274] "Started kubelet" Jun 20 18:26:35.307779 kubelet[3386]: I0620 18:26:35.307750 3386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:26:35.311922 kubelet[3386]: E0620 18:26:35.311891 3386 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:26:35.312680 kubelet[3386]: I0620 18:26:35.312649 3386 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:26:35.313971 kubelet[3386]: I0620 18:26:35.313942 3386 server.go:449] "Adding debug handlers to kubelet server" Jun 20 18:26:35.315258 kubelet[3386]: I0620 18:26:35.315230 3386 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 18:26:35.315449 kubelet[3386]: E0620 18:26:35.315428 3386 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-c937e4b650\" not found" Jun 20 18:26:35.315840 kubelet[3386]: I0620 18:26:35.315822 3386 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 18:26:35.317019 kubelet[3386]: I0620 18:26:35.316971 3386 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:26:35.317246 kubelet[3386]: I0620 18:26:35.317230 3386 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:26:35.318371 kubelet[3386]: I0620 18:26:35.318349 3386 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:26:35.323063 kubelet[3386]: I0620 18:26:35.323036 3386 factory.go:221] Registration of the systemd container factory successfully Jun 20 18:26:35.323662 kubelet[3386]: I0620 18:26:35.323639 3386 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:26:35.328059 kubelet[3386]: I0620 18:26:35.328010 3386 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:26:35.333348 kubelet[3386]: I0620 18:26:35.333319 3386 kubelet_network_linux.go:50] "Initialized iptables 
rules." protocol="IPv4" Jun 20 18:26:35.334052 kubelet[3386]: I0620 18:26:35.334030 3386 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 18:26:35.334052 kubelet[3386]: I0620 18:26:35.334051 3386 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 18:26:35.334137 kubelet[3386]: I0620 18:26:35.334081 3386 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 18:26:35.334137 kubelet[3386]: E0620 18:26:35.334112 3386 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:26:35.343866 kubelet[3386]: I0620 18:26:35.343838 3386 factory.go:221] Registration of the containerd container factory successfully Jun 20 18:26:35.382553 kubelet[3386]: I0620 18:26:35.382523 3386 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 18:26:35.382553 kubelet[3386]: I0620 18:26:35.382537 3386 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 18:26:35.382553 kubelet[3386]: I0620 18:26:35.382554 3386 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:26:35.382673 kubelet[3386]: I0620 18:26:35.382656 3386 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:26:35.382673 kubelet[3386]: I0620 18:26:35.382663 3386 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:26:35.382702 kubelet[3386]: I0620 18:26:35.382676 3386 policy_none.go:49] "None policy: Start" Jun 20 18:26:35.383371 kubelet[3386]: I0620 18:26:35.383358 3386 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 18:26:35.383460 kubelet[3386]: I0620 18:26:35.383452 3386 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:26:35.383668 kubelet[3386]: I0620 18:26:35.383620 3386 state_mem.go:75] "Updated machine memory state" Jun 20 18:26:35.387520 kubelet[3386]: I0620 18:26:35.387503 3386 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 18:26:35.387647 kubelet[3386]: I0620 18:26:35.387634 3386 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:26:35.387688 kubelet[3386]: I0620 18:26:35.387646 3386 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:26:35.391129 kubelet[3386]: I0620 18:26:35.391040 3386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:26:35.461303 kubelet[3386]: W0620 18:26:35.460799 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:35.461303 kubelet[3386]: E0620 18:26:35.460845 3386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4344.1.0-a-c937e4b650\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.461930 kubelet[3386]: W0620 18:26:35.461406 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:35.461930 kubelet[3386]: E0620 18:26:35.461531 3386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.461930 kubelet[3386]: W0620 18:26:35.461975 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:35.462430 kubelet[3386]: E0620 18:26:35.462020 3386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.490292 kubelet[3386]: I0620 18:26:35.490259 3386 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.503177 kubelet[3386]: I0620 18:26:35.503156 3386 kubelet_node_status.go:111] "Node was previously registered" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.503248 kubelet[3386]: I0620 18:26:35.503216 3386 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518383 kubelet[3386]: I0620 18:26:35.518318 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518383 kubelet[3386]: I0620 18:26:35.518346 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518383 kubelet[3386]: I0620 18:26:35.518360 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518383 kubelet[3386]: I0620 18:26:35.518371 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/37e5aca3bb57cde1b0fe9484f12cfb8e-kubeconfig\") pod 
\"kube-scheduler-ci-4344.1.0-a-c937e4b650\" (UID: \"37e5aca3bb57cde1b0fe9484f12cfb8e\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518383 kubelet[3386]: I0620 18:26:35.518381 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518520 kubelet[3386]: I0620 18:26:35.518390 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69a0d143c037b5ab1401bd999541aec9-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" (UID: \"69a0d143c037b5ab1401bd999541aec9\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518520 kubelet[3386]: I0620 18:26:35.518399 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518520 kubelet[3386]: I0620 18:26:35.518408 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:35.518520 kubelet[3386]: I0620 18:26:35.518417 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5816ec6b8e4027059e1644f311deee33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-c937e4b650\" (UID: \"5816ec6b8e4027059e1644f311deee33\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:36.306911 kubelet[3386]: I0620 18:26:36.305941 3386 apiserver.go:52] "Watching apiserver" Jun 20 18:26:36.316300 kubelet[3386]: I0620 18:26:36.316260 3386 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 18:26:36.373104 kubelet[3386]: W0620 18:26:36.372928 3386 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 18:26:36.373921 kubelet[3386]: E0620 18:26:36.373876 3386 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.0-a-c937e4b650\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" Jun 20 18:26:36.390399 kubelet[3386]: I0620 18:26:36.390304 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-c937e4b650" podStartSLOduration=3.390294208 podStartE2EDuration="3.390294208s" podCreationTimestamp="2025-06-20 18:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:26:36.380782608 +0000 UTC m=+1.120399758" watchObservedRunningTime="2025-06-20 18:26:36.390294208 +0000 UTC m=+1.129911350" Jun 20 18:26:36.400934 kubelet[3386]: I0620 18:26:36.400885 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-c937e4b650" podStartSLOduration=3.400876806 podStartE2EDuration="3.400876806s" podCreationTimestamp="2025-06-20 18:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:26:36.39049378 +0000 UTC m=+1.130110922" watchObservedRunningTime="2025-06-20 18:26:36.400876806 +0000 UTC m=+1.140493948" Jun 20 18:26:36.401042 kubelet[3386]: I0620 18:26:36.400952 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-c937e4b650" podStartSLOduration=2.400947167 podStartE2EDuration="2.400947167s" podCreationTimestamp="2025-06-20 18:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:26:36.400699274 +0000 UTC m=+1.140316416" watchObservedRunningTime="2025-06-20 18:26:36.400947167 +0000 UTC m=+1.140564317" Jun 20 18:26:41.144214 kubelet[3386]: I0620 18:26:41.144164 3386 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:26:41.144888 containerd[1891]: time="2025-06-20T18:26:41.144861182Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:26:41.146165 kubelet[3386]: I0620 18:26:41.146145 3386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:26:42.087808 systemd[1]: Created slice kubepods-besteffort-pod49bf1aa9_220f_431e_92da_6b8c6052bcd6.slice - libcontainer container kubepods-besteffort-pod49bf1aa9_220f_431e_92da_6b8c6052bcd6.slice. 
Jun 20 18:26:42.160637 kubelet[3386]: I0620 18:26:42.160600 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49bf1aa9-220f-431e-92da-6b8c6052bcd6-kube-proxy\") pod \"kube-proxy-9vzbk\" (UID: \"49bf1aa9-220f-431e-92da-6b8c6052bcd6\") " pod="kube-system/kube-proxy-9vzbk" Jun 20 18:26:42.161010 kubelet[3386]: I0620 18:26:42.160993 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49bf1aa9-220f-431e-92da-6b8c6052bcd6-lib-modules\") pod \"kube-proxy-9vzbk\" (UID: \"49bf1aa9-220f-431e-92da-6b8c6052bcd6\") " pod="kube-system/kube-proxy-9vzbk" Jun 20 18:26:42.161114 kubelet[3386]: I0620 18:26:42.161102 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49bf1aa9-220f-431e-92da-6b8c6052bcd6-xtables-lock\") pod \"kube-proxy-9vzbk\" (UID: \"49bf1aa9-220f-431e-92da-6b8c6052bcd6\") " pod="kube-system/kube-proxy-9vzbk" Jun 20 18:26:42.161246 kubelet[3386]: I0620 18:26:42.161151 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpt2\" (UniqueName: \"kubernetes.io/projected/49bf1aa9-220f-431e-92da-6b8c6052bcd6-kube-api-access-kqpt2\") pod \"kube-proxy-9vzbk\" (UID: \"49bf1aa9-220f-431e-92da-6b8c6052bcd6\") " pod="kube-system/kube-proxy-9vzbk" Jun 20 18:26:42.245113 systemd[1]: Created slice kubepods-besteffort-pod5ca8a8e9_91b9_4c24_80be_5c950329ec9a.slice - libcontainer container kubepods-besteffort-pod5ca8a8e9_91b9_4c24_80be_5c950329ec9a.slice. 
Jun 20 18:26:42.262110 kubelet[3386]: I0620 18:26:42.261690 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ca8a8e9-91b9-4c24-80be-5c950329ec9a-var-lib-calico\") pod \"tigera-operator-6c78c649f6-wdk7t\" (UID: \"5ca8a8e9-91b9-4c24-80be-5c950329ec9a\") " pod="tigera-operator/tigera-operator-6c78c649f6-wdk7t" Jun 20 18:26:42.262110 kubelet[3386]: I0620 18:26:42.261720 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlg7\" (UniqueName: \"kubernetes.io/projected/5ca8a8e9-91b9-4c24-80be-5c950329ec9a-kube-api-access-fzlg7\") pod \"tigera-operator-6c78c649f6-wdk7t\" (UID: \"5ca8a8e9-91b9-4c24-80be-5c950329ec9a\") " pod="tigera-operator/tigera-operator-6c78c649f6-wdk7t" Jun 20 18:26:42.395604 containerd[1891]: time="2025-06-20T18:26:42.395436628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vzbk,Uid:49bf1aa9-220f-431e-92da-6b8c6052bcd6,Namespace:kube-system,Attempt:0,}" Jun 20 18:26:42.452629 containerd[1891]: time="2025-06-20T18:26:42.452585503Z" level=info msg="connecting to shim 68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94" address="unix:///run/containerd/s/0d87f3dd0f5f178cccecfca4b02816efc75ca355ccaa435bf77dde87cc6146e9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:42.473238 systemd[1]: Started cri-containerd-68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94.scope - libcontainer container 68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94. 
Jun 20 18:26:42.494647 containerd[1891]: time="2025-06-20T18:26:42.494611407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9vzbk,Uid:49bf1aa9-220f-431e-92da-6b8c6052bcd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94\"" Jun 20 18:26:42.502161 containerd[1891]: time="2025-06-20T18:26:42.502056610Z" level=info msg="CreateContainer within sandbox \"68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:26:42.529782 containerd[1891]: time="2025-06-20T18:26:42.529747560Z" level=info msg="Container 65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:42.548015 containerd[1891]: time="2025-06-20T18:26:42.547940671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-wdk7t,Uid:5ca8a8e9-91b9-4c24-80be-5c950329ec9a,Namespace:tigera-operator,Attempt:0,}" Jun 20 18:26:42.549384 containerd[1891]: time="2025-06-20T18:26:42.549305332Z" level=info msg="CreateContainer within sandbox \"68946d03a1813104041fa86725dfa7334f9b8aad96ef7d43716926a844c31f94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087\"" Jun 20 18:26:42.549915 containerd[1891]: time="2025-06-20T18:26:42.549881465Z" level=info msg="StartContainer for \"65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087\"" Jun 20 18:26:42.551758 containerd[1891]: time="2025-06-20T18:26:42.551719289Z" level=info msg="connecting to shim 65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087" address="unix:///run/containerd/s/0d87f3dd0f5f178cccecfca4b02816efc75ca355ccaa435bf77dde87cc6146e9" protocol=ttrpc version=3 Jun 20 18:26:42.567196 systemd[1]: Started cri-containerd-65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087.scope - 
libcontainer container 65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087. Jun 20 18:26:42.600798 containerd[1891]: time="2025-06-20T18:26:42.600724754Z" level=info msg="StartContainer for \"65e4e6c36ef15ce10121f418714bfd33d5d10101847408c78dbdb28bf6472087\" returns successfully" Jun 20 18:26:42.611237 containerd[1891]: time="2025-06-20T18:26:42.611200295Z" level=info msg="connecting to shim 44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498" address="unix:///run/containerd/s/0b4b048399a15d4c6ae855193f1013b27b9e625523decd9a257a5af58bb3e940" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:42.629188 systemd[1]: Started cri-containerd-44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498.scope - libcontainer container 44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498. Jun 20 18:26:42.662100 containerd[1891]: time="2025-06-20T18:26:42.661496556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-wdk7t,Uid:5ca8a8e9-91b9-4c24-80be-5c950329ec9a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498\"" Jun 20 18:26:42.664373 containerd[1891]: time="2025-06-20T18:26:42.664343083Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 18:26:45.617897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215102030.mount: Deactivated successfully. 
Jun 20 18:26:45.942111 containerd[1891]: time="2025-06-20T18:26:45.941608350Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:45.944450 containerd[1891]: time="2025-06-20T18:26:45.944422879Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772" Jun 20 18:26:45.947502 containerd[1891]: time="2025-06-20T18:26:45.947464277Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:45.953094 containerd[1891]: time="2025-06-20T18:26:45.952498090Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:45.953094 containerd[1891]: time="2025-06-20T18:26:45.952974100Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 3.288598897s" Jun 20 18:26:45.953094 containerd[1891]: time="2025-06-20T18:26:45.952997148Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\"" Jun 20 18:26:45.955853 containerd[1891]: time="2025-06-20T18:26:45.955734932Z" level=info msg="CreateContainer within sandbox \"44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 18:26:45.985087 containerd[1891]: time="2025-06-20T18:26:45.984198108Z" level=info msg="Container 
8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:46.000281 containerd[1891]: time="2025-06-20T18:26:46.000249305Z" level=info msg="CreateContainer within sandbox \"44f7863a7bef2ca803f277356838c5626ca955ce243a13bbb623a35ba97bf498\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17\"" Jun 20 18:26:46.000863 containerd[1891]: time="2025-06-20T18:26:46.000734707Z" level=info msg="StartContainer for \"8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17\"" Jun 20 18:26:46.002220 containerd[1891]: time="2025-06-20T18:26:46.002165200Z" level=info msg="connecting to shim 8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17" address="unix:///run/containerd/s/0b4b048399a15d4c6ae855193f1013b27b9e625523decd9a257a5af58bb3e940" protocol=ttrpc version=3 Jun 20 18:26:46.018240 systemd[1]: Started cri-containerd-8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17.scope - libcontainer container 8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17. 
Jun 20 18:26:46.043483 containerd[1891]: time="2025-06-20T18:26:46.043434819Z" level=info msg="StartContainer for \"8800a0f08e47a63b215f8688cec8a0d2dfe554cf5f709167a0cea9e790a48e17\" returns successfully" Jun 20 18:26:46.394570 kubelet[3386]: I0620 18:26:46.394527 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9vzbk" podStartSLOduration=4.394512509 podStartE2EDuration="4.394512509s" podCreationTimestamp="2025-06-20 18:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:26:43.403454205 +0000 UTC m=+8.143071379" watchObservedRunningTime="2025-06-20 18:26:46.394512509 +0000 UTC m=+11.134129659" Jun 20 18:26:50.139652 kubelet[3386]: I0620 18:26:50.139593 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6c78c649f6-wdk7t" podStartSLOduration=4.848584075 podStartE2EDuration="8.139580183s" podCreationTimestamp="2025-06-20 18:26:42 +0000 UTC" firstStartedPulling="2025-06-20 18:26:42.662761696 +0000 UTC m=+7.402378838" lastFinishedPulling="2025-06-20 18:26:45.953757804 +0000 UTC m=+10.693374946" observedRunningTime="2025-06-20 18:26:46.396285049 +0000 UTC m=+11.135902199" watchObservedRunningTime="2025-06-20 18:26:50.139580183 +0000 UTC m=+14.879197325" Jun 20 18:26:50.951789 sudo[2377]: pam_unix(sudo:session): session closed for user root Jun 20 18:26:51.030378 sshd[2376]: Connection closed by 10.200.16.10 port 46900 Jun 20 18:26:51.030839 sshd-session[2374]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:51.034832 systemd[1]: sshd@6-10.200.20.16:22-10.200.16.10:46900.service: Deactivated successfully. Jun 20 18:26:51.038978 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:26:51.040347 systemd[1]: session-9.scope: Consumed 2.962s CPU time, 223.1M memory peak. Jun 20 18:26:51.043996 systemd-logind[1872]: Session 9 logged out. 
Waiting for processes to exit. Jun 20 18:26:51.046408 systemd-logind[1872]: Removed session 9. Jun 20 18:26:54.663402 systemd[1]: Created slice kubepods-besteffort-pod2c97018b_c299_4dac_83c4_822223d3de6c.slice - libcontainer container kubepods-besteffort-pod2c97018b_c299_4dac_83c4_822223d3de6c.slice. Jun 20 18:26:54.728252 kubelet[3386]: I0620 18:26:54.728168 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmllf\" (UniqueName: \"kubernetes.io/projected/2c97018b-c299-4dac-83c4-822223d3de6c-kube-api-access-wmllf\") pod \"calico-typha-57948c4df7-x47pz\" (UID: \"2c97018b-c299-4dac-83c4-822223d3de6c\") " pod="calico-system/calico-typha-57948c4df7-x47pz" Jun 20 18:26:54.728252 kubelet[3386]: I0620 18:26:54.728201 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c97018b-c299-4dac-83c4-822223d3de6c-tigera-ca-bundle\") pod \"calico-typha-57948c4df7-x47pz\" (UID: \"2c97018b-c299-4dac-83c4-822223d3de6c\") " pod="calico-system/calico-typha-57948c4df7-x47pz" Jun 20 18:26:54.728252 kubelet[3386]: I0620 18:26:54.728212 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c97018b-c299-4dac-83c4-822223d3de6c-typha-certs\") pod \"calico-typha-57948c4df7-x47pz\" (UID: \"2c97018b-c299-4dac-83c4-822223d3de6c\") " pod="calico-system/calico-typha-57948c4df7-x47pz" Jun 20 18:26:54.785408 systemd[1]: Created slice kubepods-besteffort-pod279da8fe_58aa_4ee6_8613_03779b9ba461.slice - libcontainer container kubepods-besteffort-pod279da8fe_58aa_4ee6_8613_03779b9ba461.slice. 
Jun 20 18:26:54.828371 kubelet[3386]: I0620 18:26:54.828273 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-lib-modules\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828371 kubelet[3386]: I0620 18:26:54.828320 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-cni-log-dir\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828371 kubelet[3386]: I0620 18:26:54.828340 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-flexvol-driver-host\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828371 kubelet[3386]: I0620 18:26:54.828350 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/279da8fe-58aa-4ee6-8613-03779b9ba461-node-certs\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828371 kubelet[3386]: I0620 18:26:54.828359 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-cni-net-dir\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828586 kubelet[3386]: I0620 18:26:54.828368 3386 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-policysync\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828586 kubelet[3386]: I0620 18:26:54.828377 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-var-run-calico\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828586 kubelet[3386]: I0620 18:26:54.828388 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-cni-bin-dir\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828586 kubelet[3386]: I0620 18:26:54.828406 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d76fz\" (UniqueName: \"kubernetes.io/projected/279da8fe-58aa-4ee6-8613-03779b9ba461-kube-api-access-d76fz\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828586 kubelet[3386]: I0620 18:26:54.828464 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/279da8fe-58aa-4ee6-8613-03779b9ba461-tigera-ca-bundle\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828662 kubelet[3386]: I0620 18:26:54.828475 3386 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-var-lib-calico\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.828662 kubelet[3386]: I0620 18:26:54.828485 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/279da8fe-58aa-4ee6-8613-03779b9ba461-xtables-lock\") pod \"calico-node-xp6h8\" (UID: \"279da8fe-58aa-4ee6-8613-03779b9ba461\") " pod="calico-system/calico-node-xp6h8" Jun 20 18:26:54.902721 kubelet[3386]: E0620 18:26:54.902680 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:26:54.929745 kubelet[3386]: I0620 18:26:54.929637 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d87a4850-5e3c-4d66-a5fc-1cb820fe465f-registration-dir\") pod \"csi-node-driver-zh6nt\" (UID: \"d87a4850-5e3c-4d66-a5fc-1cb820fe465f\") " pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:26:54.929745 kubelet[3386]: I0620 18:26:54.929699 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d87a4850-5e3c-4d66-a5fc-1cb820fe465f-socket-dir\") pod \"csi-node-driver-zh6nt\" (UID: \"d87a4850-5e3c-4d66-a5fc-1cb820fe465f\") " pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:26:54.929745 kubelet[3386]: I0620 18:26:54.929711 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d87a4850-5e3c-4d66-a5fc-1cb820fe465f-varrun\") pod \"csi-node-driver-zh6nt\" (UID: \"d87a4850-5e3c-4d66-a5fc-1cb820fe465f\") " pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:26:54.929863 kubelet[3386]: I0620 18:26:54.929762 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d87a4850-5e3c-4d66-a5fc-1cb820fe465f-kubelet-dir\") pod \"csi-node-driver-zh6nt\" (UID: \"d87a4850-5e3c-4d66-a5fc-1cb820fe465f\") " pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:26:54.929863 kubelet[3386]: I0620 18:26:54.929772 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4kj\" (UniqueName: \"kubernetes.io/projected/d87a4850-5e3c-4d66-a5fc-1cb820fe465f-kube-api-access-jn4kj\") pod \"csi-node-driver-zh6nt\" (UID: \"d87a4850-5e3c-4d66-a5fc-1cb820fe465f\") " pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:26:54.956087 kubelet[3386]: E0620 18:26:54.955870 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:54.956087 kubelet[3386]: W0620 18:26:54.955910 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:54.956087 kubelet[3386]: E0620 18:26:54.955929 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:54.968773 containerd[1891]: time="2025-06-20T18:26:54.968291571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57948c4df7-x47pz,Uid:2c97018b-c299-4dac-83c4-822223d3de6c,Namespace:calico-system,Attempt:0,}" Jun 20 18:26:55.026415 containerd[1891]: time="2025-06-20T18:26:55.026359265Z" level=info msg="connecting to shim 44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59" address="unix:///run/containerd/s/486a8fc23dd6c17d4b8471ed42792a1abad221c8a9a7374ceb7f066cc4dbe731" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:55.031996 kubelet[3386]: E0620 18:26:55.031972 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.031996 kubelet[3386]: W0620 18:26:55.031992 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.032216 kubelet[3386]: E0620 18:26:55.032010 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.032455 kubelet[3386]: E0620 18:26:55.032438 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.032979 kubelet[3386]: W0620 18:26:55.032942 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.032979 kubelet[3386]: E0620 18:26:55.032971 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.033321 kubelet[3386]: E0620 18:26:55.033307 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.033463 kubelet[3386]: W0620 18:26:55.033354 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.033463 kubelet[3386]: E0620 18:26:55.033368 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.034097 kubelet[3386]: E0620 18:26:55.033622 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.034097 kubelet[3386]: W0620 18:26:55.033632 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.034097 kubelet[3386]: E0620 18:26:55.033646 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.034423 kubelet[3386]: E0620 18:26:55.034363 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.034423 kubelet[3386]: W0620 18:26:55.034374 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.034423 kubelet[3386]: E0620 18:26:55.034388 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.034878 kubelet[3386]: E0620 18:26:55.034855 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.034878 kubelet[3386]: W0620 18:26:55.034866 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.035036 kubelet[3386]: E0620 18:26:55.034919 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.035858 kubelet[3386]: E0620 18:26:55.035843 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.036046 kubelet[3386]: W0620 18:26:55.035934 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.036497 kubelet[3386]: E0620 18:26:55.036336 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.036497 kubelet[3386]: E0620 18:26:55.036475 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.036497 kubelet[3386]: W0620 18:26:55.036484 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.036686 kubelet[3386]: E0620 18:26:55.036665 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.036984 kubelet[3386]: E0620 18:26:55.036917 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.038092 kubelet[3386]: W0620 18:26:55.037106 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.038092 kubelet[3386]: E0620 18:26:55.037622 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.038997 kubelet[3386]: E0620 18:26:55.038913 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.038997 kubelet[3386]: W0620 18:26:55.038926 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.038997 kubelet[3386]: E0620 18:26:55.038952 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.039425 kubelet[3386]: E0620 18:26:55.039348 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.039425 kubelet[3386]: W0620 18:26:55.039359 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.039425 kubelet[3386]: E0620 18:26:55.039383 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.040029 kubelet[3386]: E0620 18:26:55.039907 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.040864 kubelet[3386]: W0620 18:26:55.039916 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.040864 kubelet[3386]: E0620 18:26:55.040459 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.041057 kubelet[3386]: E0620 18:26:55.041044 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.041235 kubelet[3386]: W0620 18:26:55.041114 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.041307 kubelet[3386]: E0620 18:26:55.041287 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.041434 kubelet[3386]: E0620 18:26:55.041387 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.041434 kubelet[3386]: W0620 18:26:55.041395 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.041434 kubelet[3386]: E0620 18:26:55.041420 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.041732 kubelet[3386]: E0620 18:26:55.041717 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.042116 kubelet[3386]: W0620 18:26:55.041816 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.042116 kubelet[3386]: E0620 18:26:55.041865 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.042357 kubelet[3386]: E0620 18:26:55.042346 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.042557 kubelet[3386]: W0620 18:26:55.042506 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.042784 kubelet[3386]: E0620 18:26:55.042735 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.043145 kubelet[3386]: E0620 18:26:55.043132 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.043225 kubelet[3386]: W0620 18:26:55.043214 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.043345 kubelet[3386]: E0620 18:26:55.043334 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.043519 kubelet[3386]: E0620 18:26:55.043507 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.043741 kubelet[3386]: W0620 18:26:55.043658 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.043741 kubelet[3386]: E0620 18:26:55.043693 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.044192 kubelet[3386]: E0620 18:26:55.044115 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.044192 kubelet[3386]: W0620 18:26:55.044127 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.044866 kubelet[3386]: E0620 18:26:55.044614 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.045015 kubelet[3386]: E0620 18:26:55.044974 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.045015 kubelet[3386]: W0620 18:26:55.044987 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.046094 kubelet[3386]: E0620 18:26:55.045365 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.046289 kubelet[3386]: E0620 18:26:55.046277 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.046448 kubelet[3386]: W0620 18:26:55.046354 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.046668 kubelet[3386]: E0620 18:26:55.046600 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.046668 kubelet[3386]: W0620 18:26:55.046612 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.046668 kubelet[3386]: E0620 18:26:55.046613 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.046668 kubelet[3386]: E0620 18:26:55.046646 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.046871 kubelet[3386]: E0620 18:26:55.046849 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.046871 kubelet[3386]: W0620 18:26:55.046859 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.047038 kubelet[3386]: E0620 18:26:55.047002 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.047268 kubelet[3386]: E0620 18:26:55.047247 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.047553 kubelet[3386]: W0620 18:26:55.047362 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.047553 kubelet[3386]: E0620 18:26:55.047415 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.049165 kubelet[3386]: E0620 18:26:55.048143 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.049165 kubelet[3386]: W0620 18:26:55.048155 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.049165 kubelet[3386]: E0620 18:26:55.048166 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:55.061235 systemd[1]: Started cri-containerd-44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59.scope - libcontainer container 44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59. Jun 20 18:26:55.072154 kubelet[3386]: E0620 18:26:55.072135 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:55.072243 kubelet[3386]: W0620 18:26:55.072232 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:55.072328 kubelet[3386]: E0620 18:26:55.072290 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:55.089610 containerd[1891]: time="2025-06-20T18:26:55.089578428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xp6h8,Uid:279da8fe-58aa-4ee6-8613-03779b9ba461,Namespace:calico-system,Attempt:0,}" Jun 20 18:26:55.110320 containerd[1891]: time="2025-06-20T18:26:55.110280262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57948c4df7-x47pz,Uid:2c97018b-c299-4dac-83c4-822223d3de6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59\"" Jun 20 18:26:55.112326 containerd[1891]: time="2025-06-20T18:26:55.112295182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 18:26:55.141008 containerd[1891]: time="2025-06-20T18:26:55.140883076Z" level=info msg="connecting to shim cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491" address="unix:///run/containerd/s/096730e746df6cb5344ece2d99c852c69ffa0d2e5d18179e0c32058380de8117" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:26:55.162206 systemd[1]: Started cri-containerd-cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491.scope - libcontainer container cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491. 
Jun 20 18:26:55.184131 containerd[1891]: time="2025-06-20T18:26:55.184015202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xp6h8,Uid:279da8fe-58aa-4ee6-8613-03779b9ba461,Namespace:calico-system,Attempt:0,} returns sandbox id \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\"" Jun 20 18:26:56.335252 kubelet[3386]: E0620 18:26:56.335178 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:26:56.380192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013108175.mount: Deactivated successfully. Jun 20 18:26:57.458007 containerd[1891]: time="2025-06-20T18:26:57.457954235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:57.463129 containerd[1891]: time="2025-06-20T18:26:57.463099025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 20 18:26:57.468878 containerd[1891]: time="2025-06-20T18:26:57.468851691Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:57.475267 containerd[1891]: time="2025-06-20T18:26:57.475215361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:57.475731 containerd[1891]: time="2025-06-20T18:26:57.475424189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id 
\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 2.363091478s" Jun 20 18:26:57.475731 containerd[1891]: time="2025-06-20T18:26:57.475447685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 20 18:26:57.476868 containerd[1891]: time="2025-06-20T18:26:57.476836425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 18:26:57.485508 containerd[1891]: time="2025-06-20T18:26:57.485479220Z" level=info msg="CreateContainer within sandbox \"44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 18:26:57.516808 containerd[1891]: time="2025-06-20T18:26:57.516777552Z" level=info msg="Container 13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:57.536038 containerd[1891]: time="2025-06-20T18:26:57.535972484Z" level=info msg="CreateContainer within sandbox \"44c370d1dfa8e0668ed281dc25f11933a99cb129af852e161b8f9b20ebb52e59\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c\"" Jun 20 18:26:57.536786 containerd[1891]: time="2025-06-20T18:26:57.536487102Z" level=info msg="StartContainer for \"13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c\"" Jun 20 18:26:57.540295 containerd[1891]: time="2025-06-20T18:26:57.540260649Z" level=info msg="connecting to shim 13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c" address="unix:///run/containerd/s/486a8fc23dd6c17d4b8471ed42792a1abad221c8a9a7374ceb7f066cc4dbe731" protocol=ttrpc version=3 Jun 20 
18:26:57.563193 systemd[1]: Started cri-containerd-13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c.scope - libcontainer container 13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c. Jun 20 18:26:57.596032 containerd[1891]: time="2025-06-20T18:26:57.595976952Z" level=info msg="StartContainer for \"13a947d6e44493dd75139aa0324f727add4e68f387081785453974c3f422978c\" returns successfully" Jun 20 18:26:58.334748 kubelet[3386]: E0620 18:26:58.334710 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:26:58.425060 kubelet[3386]: I0620 18:26:58.424737 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57948c4df7-x47pz" podStartSLOduration=2.060529058 podStartE2EDuration="4.42472543s" podCreationTimestamp="2025-06-20 18:26:54 +0000 UTC" firstStartedPulling="2025-06-20 18:26:55.111888486 +0000 UTC m=+19.851505628" lastFinishedPulling="2025-06-20 18:26:57.476084858 +0000 UTC m=+22.215702000" observedRunningTime="2025-06-20 18:26:58.42431523 +0000 UTC m=+23.163932380" watchObservedRunningTime="2025-06-20 18:26:58.42472543 +0000 UTC m=+23.164342580" Jun 20 18:26:58.438389 kubelet[3386]: E0620 18:26:58.438356 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.438572 kubelet[3386]: W0620 18:26:58.438374 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.438572 kubelet[3386]: E0620 18:26:58.438490 3386 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.438881 kubelet[3386]: E0620 18:26:58.438749 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.438881 kubelet[3386]: W0620 18:26:58.438759 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.438881 kubelet[3386]: E0620 18:26:58.438769 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.439087 kubelet[3386]: E0620 18:26:58.438968 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.439087 kubelet[3386]: W0620 18:26:58.438978 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.439087 kubelet[3386]: E0620 18:26:58.438991 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.439344 kubelet[3386]: E0620 18:26:58.439333 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.439511 kubelet[3386]: W0620 18:26:58.439413 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.439511 kubelet[3386]: E0620 18:26:58.439428 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.439723 kubelet[3386]: E0620 18:26:58.439711 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.439957 kubelet[3386]: W0620 18:26:58.439802 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.439957 kubelet[3386]: E0620 18:26:58.439834 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.440217 kubelet[3386]: E0620 18:26:58.440205 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.440435 kubelet[3386]: W0620 18:26:58.440280 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.440435 kubelet[3386]: E0620 18:26:58.440353 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.440618 kubelet[3386]: E0620 18:26:58.440608 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.440687 kubelet[3386]: W0620 18:26:58.440677 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.440811 kubelet[3386]: E0620 18:26:58.440727 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.441087 kubelet[3386]: E0620 18:26:58.441062 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.441204 kubelet[3386]: W0620 18:26:58.441117 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.441204 kubelet[3386]: E0620 18:26:58.441227 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.441870 kubelet[3386]: E0620 18:26:58.441770 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.441870 kubelet[3386]: W0620 18:26:58.441800 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.441870 kubelet[3386]: E0620 18:26:58.441811 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.442328 kubelet[3386]: E0620 18:26:58.442301 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.442673 kubelet[3386]: W0620 18:26:58.442518 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.442673 kubelet[3386]: E0620 18:26:58.442536 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.442789 kubelet[3386]: E0620 18:26:58.442777 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.442852 kubelet[3386]: W0620 18:26:58.442843 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.442998 kubelet[3386]: E0620 18:26:58.442894 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.443110 kubelet[3386]: E0620 18:26:58.443099 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.443176 kubelet[3386]: W0620 18:26:58.443165 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.443329 kubelet[3386]: E0620 18:26:58.443237 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.443441 kubelet[3386]: E0620 18:26:58.443431 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.443590 kubelet[3386]: W0620 18:26:58.443485 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.443590 kubelet[3386]: E0620 18:26:58.443499 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.443798 kubelet[3386]: E0620 18:26:58.443704 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.443798 kubelet[3386]: W0620 18:26:58.443714 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.443798 kubelet[3386]: E0620 18:26:58.443723 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.443961 kubelet[3386]: E0620 18:26:58.443951 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.444063 kubelet[3386]: W0620 18:26:58.444005 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.444063 kubelet[3386]: E0620 18:26:58.444019 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.461151 kubelet[3386]: E0620 18:26:58.461132 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.461151 kubelet[3386]: W0620 18:26:58.461146 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.461151 kubelet[3386]: E0620 18:26:58.461157 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.461483 kubelet[3386]: E0620 18:26:58.461469 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.461556 kubelet[3386]: W0620 18:26:58.461509 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.461665 kubelet[3386]: E0620 18:26:58.461594 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.461839 kubelet[3386]: E0620 18:26:58.461828 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.461931 kubelet[3386]: W0620 18:26:58.461875 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.462052 kubelet[3386]: E0620 18:26:58.461961 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.462259 kubelet[3386]: E0620 18:26:58.462243 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.462349 kubelet[3386]: W0620 18:26:58.462332 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.462407 kubelet[3386]: E0620 18:26:58.462389 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.462630 kubelet[3386]: E0620 18:26:58.462604 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.462630 kubelet[3386]: W0620 18:26:58.462617 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.462822 kubelet[3386]: E0620 18:26:58.462804 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.462937 kubelet[3386]: E0620 18:26:58.462915 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.462937 kubelet[3386]: W0620 18:26:58.462926 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.463117 kubelet[3386]: E0620 18:26:58.463046 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.463308 kubelet[3386]: E0620 18:26:58.463285 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.463308 kubelet[3386]: W0620 18:26:58.463296 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.463474 kubelet[3386]: E0620 18:26:58.463453 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.463647 kubelet[3386]: E0620 18:26:58.463617 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.463647 kubelet[3386]: W0620 18:26:58.463635 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.463803 kubelet[3386]: E0620 18:26:58.463709 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.463996 kubelet[3386]: E0620 18:26:58.463978 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.464111 kubelet[3386]: W0620 18:26:58.464054 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.464176 kubelet[3386]: E0620 18:26:58.464167 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.464634 kubelet[3386]: E0620 18:26:58.464619 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.464748 kubelet[3386]: W0620 18:26:58.464695 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.464748 kubelet[3386]: E0620 18:26:58.464714 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.464978 kubelet[3386]: E0620 18:26:58.464956 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.464978 kubelet[3386]: W0620 18:26:58.464967 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.465251 kubelet[3386]: E0620 18:26:58.465097 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.465338 kubelet[3386]: E0620 18:26:58.465329 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.465413 kubelet[3386]: W0620 18:26:58.465376 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.465527 kubelet[3386]: E0620 18:26:58.465510 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.465721 kubelet[3386]: E0620 18:26:58.465698 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.465721 kubelet[3386]: W0620 18:26:58.465709 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.465855 kubelet[3386]: E0620 18:26:58.465804 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.466048 kubelet[3386]: E0620 18:26:58.466034 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.466157 kubelet[3386]: W0620 18:26:58.466140 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.466230 kubelet[3386]: E0620 18:26:58.466200 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.466508 kubelet[3386]: E0620 18:26:58.466495 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.466644 kubelet[3386]: W0620 18:26:58.466567 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.466644 kubelet[3386]: E0620 18:26:58.466588 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.466838 kubelet[3386]: E0620 18:26:58.466828 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.467093 kubelet[3386]: W0620 18:26:58.466904 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.467093 kubelet[3386]: E0620 18:26:58.466923 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.467352 kubelet[3386]: E0620 18:26:58.467340 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.467409 kubelet[3386]: W0620 18:26:58.467401 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.467504 kubelet[3386]: E0620 18:26:58.467448 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:26:58.467706 kubelet[3386]: E0620 18:26:58.467669 3386 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:26:58.467706 kubelet[3386]: W0620 18:26:58.467679 3386 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:26:58.467706 kubelet[3386]: E0620 18:26:58.467689 3386 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:26:58.779673 containerd[1891]: time="2025-06-20T18:26:58.778630381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:58.781844 containerd[1891]: time="2025-06-20T18:26:58.781807523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 20 18:26:58.787673 containerd[1891]: time="2025-06-20T18:26:58.787629351Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:58.791273 containerd[1891]: time="2025-06-20T18:26:58.791219510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:26:58.791928 containerd[1891]: time="2025-06-20T18:26:58.791893627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 1.315026002s" Jun 20 18:26:58.791928 containerd[1891]: time="2025-06-20T18:26:58.791923572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 20 18:26:58.794653 containerd[1891]: time="2025-06-20T18:26:58.794632849Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 18:26:58.830540 containerd[1891]: time="2025-06-20T18:26:58.830510544Z" level=info msg="Container 63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:26:58.835064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737048855.mount: Deactivated successfully. Jun 20 18:26:58.851534 containerd[1891]: time="2025-06-20T18:26:58.851459118Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\"" Jun 20 18:26:58.851785 containerd[1891]: time="2025-06-20T18:26:58.851763492Z" level=info msg="StartContainer for \"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\"" Jun 20 18:26:58.852738 containerd[1891]: time="2025-06-20T18:26:58.852714887Z" level=info msg="connecting to shim 63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449" address="unix:///run/containerd/s/096730e746df6cb5344ece2d99c852c69ffa0d2e5d18179e0c32058380de8117" protocol=ttrpc version=3 Jun 20 18:26:58.870462 systemd[1]: Started cri-containerd-63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449.scope - libcontainer container 63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449. Jun 20 18:26:58.898544 containerd[1891]: time="2025-06-20T18:26:58.898474249Z" level=info msg="StartContainer for \"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\" returns successfully" Jun 20 18:26:58.902381 systemd[1]: cri-containerd-63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449.scope: Deactivated successfully. 
Jun 20 18:26:58.905296 containerd[1891]: time="2025-06-20T18:26:58.905222295Z" level=info msg="received exit event container_id:\"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\" id:\"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\" pid:4004 exited_at:{seconds:1750444018 nanos:904881104}" Jun 20 18:26:58.905296 containerd[1891]: time="2025-06-20T18:26:58.905259223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\" id:\"63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449\" pid:4004 exited_at:{seconds:1750444018 nanos:904881104}" Jun 20 18:26:58.919170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63adab207ba77c3ac311513bbc9b3ccdd631152c4d0e85c85ce003f56f8cf449-rootfs.mount: Deactivated successfully. Jun 20 18:26:59.411252 kubelet[3386]: I0620 18:26:59.411215 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:00.334631 kubelet[3386]: E0620 18:27:00.334547 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:27:00.415767 containerd[1891]: time="2025-06-20T18:27:00.415615804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 18:27:02.335493 kubelet[3386]: E0620 18:27:02.335354 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:27:02.692811 containerd[1891]: time="2025-06-20T18:27:02.692707212Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:02.697565 containerd[1891]: time="2025-06-20T18:27:02.697529379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 20 18:27:02.700494 containerd[1891]: time="2025-06-20T18:27:02.700457372Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:02.704410 containerd[1891]: time="2025-06-20T18:27:02.704372578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:02.704931 containerd[1891]: time="2025-06-20T18:27:02.704643831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 2.288995539s" Jun 20 18:27:02.704931 containerd[1891]: time="2025-06-20T18:27:02.704669431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 20 18:27:02.707180 containerd[1891]: time="2025-06-20T18:27:02.707154881Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 18:27:02.739296 containerd[1891]: time="2025-06-20T18:27:02.739270873Z" level=info msg="Container 62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575: CDI devices from CRI 
Config.CDIDevices: []" Jun 20 18:27:02.758271 containerd[1891]: time="2025-06-20T18:27:02.758243159Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\"" Jun 20 18:27:02.758741 containerd[1891]: time="2025-06-20T18:27:02.758639319Z" level=info msg="StartContainer for \"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\"" Jun 20 18:27:02.759649 containerd[1891]: time="2025-06-20T18:27:02.759618578Z" level=info msg="connecting to shim 62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575" address="unix:///run/containerd/s/096730e746df6cb5344ece2d99c852c69ffa0d2e5d18179e0c32058380de8117" protocol=ttrpc version=3 Jun 20 18:27:02.784176 systemd[1]: Started cri-containerd-62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575.scope - libcontainer container 62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575. Jun 20 18:27:02.812368 containerd[1891]: time="2025-06-20T18:27:02.812346889Z" level=info msg="StartContainer for \"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\" returns successfully" Jun 20 18:27:04.029914 containerd[1891]: time="2025-06-20T18:27:04.029866138Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:27:04.033953 systemd[1]: cri-containerd-62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575.scope: Deactivated successfully. Jun 20 18:27:04.034562 systemd[1]: cri-containerd-62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575.scope: Consumed 297ms CPU time, 189.2M memory peak, 165.8M written to disk. 
Jun 20 18:27:04.034899 containerd[1891]: time="2025-06-20T18:27:04.034856988Z" level=info msg="received exit event container_id:\"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\" id:\"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\" pid:4065 exited_at:{seconds:1750444024 nanos:33731574}" Jun 20 18:27:04.035377 containerd[1891]: time="2025-06-20T18:27:04.035344822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\" id:\"62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575\" pid:4065 exited_at:{seconds:1750444024 nanos:33731574}" Jun 20 18:27:04.050030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62d1cbfa3248860fcaf51f2320c60568a95c2908d9d291388c01863a5083b575-rootfs.mount: Deactivated successfully. Jun 20 18:27:04.083079 kubelet[3386]: I0620 18:27:04.083041 3386 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 18:27:04.434715 kubelet[3386]: W0620 18:27:04.141663 3386 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.434715 kubelet[3386]: E0620 18:27:04.141697 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.434715 kubelet[3386]: W0620 18:27:04.142216 3386 reflector.go:561] 
object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.434715 kubelet[3386]: E0620 18:27:04.142293 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.434715 kubelet[3386]: W0620 18:27:04.142438 3386 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.129328 systemd[1]: Created slice kubepods-burstable-pod223ca0a8_aaba_42fb_83b2_4eeb9dd6fc80.slice - libcontainer container kubepods-burstable-pod223ca0a8_aaba_42fb_83b2_4eeb9dd6fc80.slice. 
Jun 20 18:27:04.435453 kubelet[3386]: E0620 18:27:04.142455 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.435453 kubelet[3386]: W0620 18:27:04.142700 3386 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.435453 kubelet[3386]: E0620 18:27:04.142723 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.435453 kubelet[3386]: W0620 18:27:04.143150 3386 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.143343 systemd[1]: Created slice kubepods-besteffort-pod7f743136_252c_4fc8_8349_f6f235c545b3.slice - libcontainer container 
kubepods-besteffort-pod7f743136_252c_4fc8_8349_f6f235c545b3.slice. Jun 20 18:27:04.435740 kubelet[3386]: E0620 18:27:04.143175 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.435740 kubelet[3386]: W0620 18:27:04.143200 3386 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.435740 kubelet[3386]: E0620 18:27:04.143208 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.435740 kubelet[3386]: W0620 18:27:04.143225 3386 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4344.1.0-a-c937e4b650" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object Jun 20 18:27:04.149382 systemd[1]: Created slice kubepods-besteffort-pod2c9fa1bb_fb94_4eeb_a246_510a247052b1.slice - 
libcontainer container kubepods-besteffort-pod2c9fa1bb_fb94_4eeb_a246_510a247052b1.slice. Jun 20 18:27:04.435864 kubelet[3386]: E0620 18:27:04.143236 3386 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4344.1.0-a-c937e4b650\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4344.1.0-a-c937e4b650' and this object" logger="UnhandledError" Jun 20 18:27:04.435864 kubelet[3386]: I0620 18:27:04.196703 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxppd\" (UniqueName: \"kubernetes.io/projected/762b512e-7ebf-40d7-a3fb-fa664d1bb2bc-kube-api-access-sxppd\") pod \"coredns-7c65d6cfc9-t4bch\" (UID: \"762b512e-7ebf-40d7-a3fb-fa664d1bb2bc\") " pod="kube-system/coredns-7c65d6cfc9-t4bch" Jun 20 18:27:04.435864 kubelet[3386]: I0620 18:27:04.196749 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjc2\" (UniqueName: \"kubernetes.io/projected/2c9fa1bb-fb94-4eeb-a246-510a247052b1-kube-api-access-bfjc2\") pod \"calico-kube-controllers-76b8dd894f-plbdq\" (UID: \"2c9fa1bb-fb94-4eeb-a246-510a247052b1\") " pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" Jun 20 18:27:04.435864 kubelet[3386]: I0620 18:27:04.196762 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-config\") pod \"goldmane-dc7b455cb-86p6k\" (UID: \"7f743136-252c-4fc8-8349-f6f235c545b3\") " pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:04.435864 kubelet[3386]: I0620 18:27:04.196775 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/762b512e-7ebf-40d7-a3fb-fa664d1bb2bc-config-volume\") pod \"coredns-7c65d6cfc9-t4bch\" (UID: \"762b512e-7ebf-40d7-a3fb-fa664d1bb2bc\") " pod="kube-system/coredns-7c65d6cfc9-t4bch" Jun 20 18:27:04.154935 systemd[1]: Created slice kubepods-besteffort-poddc05d7a5_d79d_4901_9734_e49ba246cf63.slice - libcontainer container kubepods-besteffort-poddc05d7a5_d79d_4901_9734_e49ba246cf63.slice. Jun 20 18:27:04.436018 kubelet[3386]: I0620 18:27:04.196798 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxlx\" (UniqueName: \"kubernetes.io/projected/dc05d7a5-d79d-4901-9734-e49ba246cf63-kube-api-access-2bxlx\") pod \"calico-apiserver-68bf6fb6d-c75mx\" (UID: \"dc05d7a5-d79d-4901-9734-e49ba246cf63\") " pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" Jun 20 18:27:04.436018 kubelet[3386]: I0620 18:27:04.196810 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxfw2\" (UniqueName: \"kubernetes.io/projected/83c44774-a76a-4f8b-9061-9919c8d09dde-kube-api-access-rxfw2\") pod \"calico-apiserver-68bf6fb6d-r7clf\" (UID: \"83c44774-a76a-4f8b-9061-9919c8d09dde\") " pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" Jun 20 18:27:04.436018 kubelet[3386]: I0620 18:27:04.196819 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair\") pod \"whisker-6f9c78985d-gzdjm\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " pod="calico-system/whisker-6f9c78985d-gzdjm" Jun 20 18:27:04.436018 kubelet[3386]: I0620 18:27:04.196830 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-key-pair\") pod \"goldmane-dc7b455cb-86p6k\" (UID: \"7f743136-252c-4fc8-8349-f6f235c545b3\") " pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:04.436018 kubelet[3386]: I0620 18:27:04.196841 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fs8q\" (UniqueName: \"kubernetes.io/projected/223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80-kube-api-access-4fs8q\") pod \"coredns-7c65d6cfc9-hsf8m\" (UID: \"223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80\") " pod="kube-system/coredns-7c65d6cfc9-hsf8m" Jun 20 18:27:04.160964 systemd[1]: Created slice kubepods-burstable-pod762b512e_7ebf_40d7_a3fb_fa664d1bb2bc.slice - libcontainer container kubepods-burstable-pod762b512e_7ebf_40d7_a3fb_fa664d1bb2bc.slice. Jun 20 18:27:04.436163 kubelet[3386]: I0620 18:27:04.196873 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-ca-bundle\") pod \"goldmane-dc7b455cb-86p6k\" (UID: \"7f743136-252c-4fc8-8349-f6f235c545b3\") " pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:04.436163 kubelet[3386]: I0620 18:27:04.196882 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgrc\" (UniqueName: \"kubernetes.io/projected/7f743136-252c-4fc8-8349-f6f235c545b3-kube-api-access-4dgrc\") pod \"goldmane-dc7b455cb-86p6k\" (UID: \"7f743136-252c-4fc8-8349-f6f235c545b3\") " pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:04.436163 kubelet[3386]: I0620 18:27:04.196892 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80-config-volume\") pod \"coredns-7c65d6cfc9-hsf8m\" (UID: 
\"223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80\") " pod="kube-system/coredns-7c65d6cfc9-hsf8m" Jun 20 18:27:04.436163 kubelet[3386]: I0620 18:27:04.196958 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc05d7a5-d79d-4901-9734-e49ba246cf63-calico-apiserver-certs\") pod \"calico-apiserver-68bf6fb6d-c75mx\" (UID: \"dc05d7a5-d79d-4901-9734-e49ba246cf63\") " pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" Jun 20 18:27:04.436163 kubelet[3386]: I0620 18:27:04.196970 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364325ca-57cf-4846-ab47-2d5582af4009-whisker-ca-bundle\") pod \"whisker-6f9c78985d-gzdjm\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " pod="calico-system/whisker-6f9c78985d-gzdjm" Jun 20 18:27:04.166267 systemd[1]: Created slice kubepods-besteffort-pod83c44774_a76a_4f8b_9061_9919c8d09dde.slice - libcontainer container kubepods-besteffort-pod83c44774_a76a_4f8b_9061_9919c8d09dde.slice. 
Jun 20 18:27:04.436327 kubelet[3386]: I0620 18:27:04.197001 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83c44774-a76a-4f8b-9061-9919c8d09dde-calico-apiserver-certs\") pod \"calico-apiserver-68bf6fb6d-r7clf\" (UID: \"83c44774-a76a-4f8b-9061-9919c8d09dde\") " pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" Jun 20 18:27:04.436327 kubelet[3386]: I0620 18:27:04.197011 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttgx\" (UniqueName: \"kubernetes.io/projected/364325ca-57cf-4846-ab47-2d5582af4009-kube-api-access-2ttgx\") pod \"whisker-6f9c78985d-gzdjm\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " pod="calico-system/whisker-6f9c78985d-gzdjm" Jun 20 18:27:04.436327 kubelet[3386]: I0620 18:27:04.197020 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c9fa1bb-fb94-4eeb-a246-510a247052b1-tigera-ca-bundle\") pod \"calico-kube-controllers-76b8dd894f-plbdq\" (UID: \"2c9fa1bb-fb94-4eeb-a246-510a247052b1\") " pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" Jun 20 18:27:04.171678 systemd[1]: Created slice kubepods-besteffort-pod364325ca_57cf_4846_ab47_2d5582af4009.slice - libcontainer container kubepods-besteffort-pod364325ca_57cf_4846_ab47_2d5582af4009.slice. Jun 20 18:27:04.339250 systemd[1]: Created slice kubepods-besteffort-podd87a4850_5e3c_4d66_a5fc_1cb820fe465f.slice - libcontainer container kubepods-besteffort-podd87a4850_5e3c_4d66_a5fc_1cb820fe465f.slice. 
Jun 20 18:27:04.438160 containerd[1891]: time="2025-06-20T18:27:04.438130877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh6nt,Uid:d87a4850-5e3c-4d66-a5fc-1cb820fe465f,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:04.736693 containerd[1891]: time="2025-06-20T18:27:04.736511211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hsf8m,Uid:223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:04.742363 containerd[1891]: time="2025-06-20T18:27:04.742305685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4bch,Uid:762b512e-7ebf-40d7-a3fb-fa664d1bb2bc,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:04.743898 containerd[1891]: time="2025-06-20T18:27:04.743840004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b8dd894f-plbdq,Uid:2c9fa1bb-fb94-4eeb-a246-510a247052b1,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:05.023801 containerd[1891]: time="2025-06-20T18:27:05.023640732Z" level=error msg="Failed to destroy network for sandbox \"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.032809 containerd[1891]: time="2025-06-20T18:27:05.032729407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh6nt,Uid:d87a4850-5e3c-4d66-a5fc-1cb820fe465f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.033337 kubelet[3386]: E0620 18:27:05.033298 
3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.033537 kubelet[3386]: E0620 18:27:05.033489 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:27:05.033537 kubelet[3386]: E0620 18:27:05.033510 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh6nt" Jun 20 18:27:05.033969 kubelet[3386]: E0620 18:27:05.033742 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zh6nt_calico-system(d87a4850-5e3c-4d66-a5fc-1cb820fe465f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zh6nt_calico-system(d87a4850-5e3c-4d66-a5fc-1cb820fe465f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a538d9f8301bded01e5d8d07936c66816a909c9ba88d8f7288e7a4471cd5f521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zh6nt" podUID="d87a4850-5e3c-4d66-a5fc-1cb820fe465f" Jun 20 18:27:05.043284 containerd[1891]: time="2025-06-20T18:27:05.043249838Z" level=error msg="Failed to destroy network for sandbox \"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.052140 containerd[1891]: time="2025-06-20T18:27:05.052106237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hsf8m,Uid:223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.053850 kubelet[3386]: E0620 18:27:05.052258 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.053850 kubelet[3386]: E0620 18:27:05.052292 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hsf8m" Jun 20 18:27:05.053850 kubelet[3386]: E0620 18:27:05.052303 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hsf8m" Jun 20 18:27:05.053928 kubelet[3386]: E0620 18:27:05.052333 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hsf8m_kube-system(223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hsf8m_kube-system(223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adbbef387ef636f9ff03881f83554f8e8f82a5ef9a1e061ab070d6fde5e711ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hsf8m" podUID="223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80" Jun 20 18:27:05.055117 containerd[1891]: time="2025-06-20T18:27:05.054192182Z" level=error msg="Failed to destroy network for sandbox \"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.056648 systemd[1]: run-netns-cni\x2d9e2b8946\x2d31cb\x2d6c50\x2db3ed\x2d63384f5281ec.mount: Deactivated successfully. 
Jun 20 18:27:05.060997 containerd[1891]: time="2025-06-20T18:27:05.060952523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4bch,Uid:762b512e-7ebf-40d7-a3fb-fa664d1bb2bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.061458 kubelet[3386]: E0620 18:27:05.061425 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.061612 kubelet[3386]: E0620 18:27:05.061463 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t4bch" Jun 20 18:27:05.061612 kubelet[3386]: E0620 18:27:05.061498 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-t4bch" Jun 20 18:27:05.061612 kubelet[3386]: E0620 18:27:05.061530 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t4bch_kube-system(762b512e-7ebf-40d7-a3fb-fa664d1bb2bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t4bch_kube-system(762b512e-7ebf-40d7-a3fb-fa664d1bb2bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58001c94e340ae1e2d6542e57ecbacd3182ebc9b008dd95a03f7a417f6ece89d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t4bch" podUID="762b512e-7ebf-40d7-a3fb-fa664d1bb2bc" Jun 20 18:27:05.069882 containerd[1891]: time="2025-06-20T18:27:05.069749736Z" level=error msg="Failed to destroy network for sandbox \"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.071597 systemd[1]: run-netns-cni\x2dc2c89cd1\x2d4e97\x2decf0\x2df450\x2d597574b040c9.mount: Deactivated successfully. 
Jun 20 18:27:05.075189 containerd[1891]: time="2025-06-20T18:27:05.075152827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b8dd894f-plbdq,Uid:2c9fa1bb-fb94-4eeb-a246-510a247052b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.075422 kubelet[3386]: E0620 18:27:05.075403 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:05.075691 kubelet[3386]: E0620 18:27:05.075509 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" Jun 20 18:27:05.075691 kubelet[3386]: E0620 18:27:05.075532 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" Jun 20 18:27:05.075691 kubelet[3386]: E0620 18:27:05.075557 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76b8dd894f-plbdq_calico-system(2c9fa1bb-fb94-4eeb-a246-510a247052b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76b8dd894f-plbdq_calico-system(2c9fa1bb-fb94-4eeb-a246-510a247052b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d2e90d33bc9898fe635d66ff0f6af6cbb14ab8caef8070fcb0f116e3bddabe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" podUID="2c9fa1bb-fb94-4eeb-a246-510a247052b1" Jun 20 18:27:05.299571 kubelet[3386]: E0620 18:27:05.299297 3386 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.299571 kubelet[3386]: E0620 18:27:05.299335 3386 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.299571 kubelet[3386]: E0620 18:27:05.299365 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83c44774-a76a-4f8b-9061-9919c8d09dde-calico-apiserver-certs podName:83c44774-a76a-4f8b-9061-9919c8d09dde nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.79934742 +0000 UTC m=+30.538964570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/83c44774-a76a-4f8b-9061-9919c8d09dde-calico-apiserver-certs") pod "calico-apiserver-68bf6fb6d-r7clf" (UID: "83c44774-a76a-4f8b-9061-9919c8d09dde") : failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.299571 kubelet[3386]: E0620 18:27:05.299385 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-config podName:7f743136-252c-4fc8-8349-f6f235c545b3 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.799374245 +0000 UTC m=+30.538991387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-config") pod "goldmane-dc7b455cb-86p6k" (UID: "7f743136-252c-4fc8-8349-f6f235c545b3") : failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.299571 kubelet[3386]: E0620 18:27:05.299399 3386 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.299974 kubelet[3386]: E0620 18:27:05.299416 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-ca-bundle podName:7f743136-252c-4fc8-8349-f6f235c545b3 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.799411494 +0000 UTC m=+30.539028636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-ca-bundle") pod "goldmane-dc7b455cb-86p6k" (UID: "7f743136-252c-4fc8-8349-f6f235c545b3") : failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.300226 kubelet[3386]: E0620 18:27:05.300136 3386 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.300226 kubelet[3386]: E0620 18:27:05.300153 3386 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.300226 kubelet[3386]: E0620 18:27:05.300173 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-key-pair podName:7f743136-252c-4fc8-8349-f6f235c545b3 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.800164596 +0000 UTC m=+30.539781738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/7f743136-252c-4fc8-8349-f6f235c545b3-goldmane-key-pair") pod "goldmane-dc7b455cb-86p6k" (UID: "7f743136-252c-4fc8-8349-f6f235c545b3") : failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.300226 kubelet[3386]: E0620 18:27:05.300191 3386 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.300226 kubelet[3386]: E0620 18:27:05.300192 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc05d7a5-d79d-4901-9734-e49ba246cf63-calico-apiserver-certs podName:dc05d7a5-d79d-4901-9734-e49ba246cf63 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.800183637 +0000 UTC m=+30.539800779 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dc05d7a5-d79d-4901-9734-e49ba246cf63-calico-apiserver-certs") pod "calico-apiserver-68bf6fb6d-c75mx" (UID: "dc05d7a5-d79d-4901-9734-e49ba246cf63") : failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.300372 kubelet[3386]: E0620 18:27:05.300214 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair podName:364325ca-57cf-4846-ab47-2d5582af4009 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.800210333 +0000 UTC m=+30.539827475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair") pod "whisker-6f9c78985d-gzdjm" (UID: "364325ca-57cf-4846-ab47-2d5582af4009") : failed to sync secret cache: timed out waiting for the condition Jun 20 18:27:05.307963 kubelet[3386]: E0620 18:27:05.307922 3386 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.307963 kubelet[3386]: E0620 18:27:05.307948 3386 projected.go:194] Error preparing data for projected volume kube-api-access-2bxlx for pod calico-apiserver/calico-apiserver-68bf6fb6d-c75mx: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.308064 kubelet[3386]: E0620 18:27:05.307985 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dc05d7a5-d79d-4901-9734-e49ba246cf63-kube-api-access-2bxlx podName:dc05d7a5-d79d-4901-9734-e49ba246cf63 nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.807975966 +0000 UTC m=+30.547593108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2bxlx" (UniqueName: "kubernetes.io/projected/dc05d7a5-d79d-4901-9734-e49ba246cf63-kube-api-access-2bxlx") pod "calico-apiserver-68bf6fb6d-c75mx" (UID: "dc05d7a5-d79d-4901-9734-e49ba246cf63") : failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.308064 kubelet[3386]: E0620 18:27:05.307934 3386 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.308064 kubelet[3386]: E0620 18:27:05.308000 3386 projected.go:194] Error preparing data for projected volume kube-api-access-rxfw2 for pod calico-apiserver/calico-apiserver-68bf6fb6d-r7clf: failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.308064 kubelet[3386]: E0620 18:27:05.308013 3386 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83c44774-a76a-4f8b-9061-9919c8d09dde-kube-api-access-rxfw2 podName:83c44774-a76a-4f8b-9061-9919c8d09dde nodeName:}" failed. No retries permitted until 2025-06-20 18:27:05.808008759 +0000 UTC m=+30.547625901 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxfw2" (UniqueName: "kubernetes.io/projected/83c44774-a76a-4f8b-9061-9919c8d09dde-kube-api-access-rxfw2") pod "calico-apiserver-68bf6fb6d-r7clf" (UID: "83c44774-a76a-4f8b-9061-9919c8d09dde") : failed to sync configmap cache: timed out waiting for the condition Jun 20 18:27:05.432405 containerd[1891]: time="2025-06-20T18:27:05.431678107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 18:27:05.942703 containerd[1891]: time="2025-06-20T18:27:05.942362656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-86p6k,Uid:7f743136-252c-4fc8-8349-f6f235c545b3,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:05.943402 containerd[1891]: time="2025-06-20T18:27:05.942612045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9c78985d-gzdjm,Uid:364325ca-57cf-4846-ab47-2d5582af4009,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:05.945023 containerd[1891]: time="2025-06-20T18:27:05.944503194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-r7clf,Uid:83c44774-a76a-4f8b-9061-9919c8d09dde,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:05.945233 containerd[1891]: time="2025-06-20T18:27:05.945185183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-c75mx,Uid:dc05d7a5-d79d-4901-9734-e49ba246cf63,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:06.011380 containerd[1891]: time="2025-06-20T18:27:06.011282165Z" level=error msg="Failed to destroy network for sandbox \"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.016890 containerd[1891]: time="2025-06-20T18:27:06.016658799Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-dc7b455cb-86p6k,Uid:7f743136-252c-4fc8-8349-f6f235c545b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.017478 kubelet[3386]: E0620 18:27:06.017414 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.017854 kubelet[3386]: E0620 18:27:06.017590 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:06.017854 kubelet[3386]: E0620 18:27:06.017613 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-86p6k" Jun 20 18:27:06.017854 kubelet[3386]: E0620 18:27:06.017654 3386 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-86p6k_calico-system(7f743136-252c-4fc8-8349-f6f235c545b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-86p6k_calico-system(7f743136-252c-4fc8-8349-f6f235c545b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c33b1fef2245397e8da7a41ea928547020b85c16cc4b51d2d3de5383168ad720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-86p6k" podUID="7f743136-252c-4fc8-8349-f6f235c545b3" Jun 20 18:27:06.028767 containerd[1891]: time="2025-06-20T18:27:06.028732285Z" level=error msg="Failed to destroy network for sandbox \"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.033995 containerd[1891]: time="2025-06-20T18:27:06.033761288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9c78985d-gzdjm,Uid:364325ca-57cf-4846-ab47-2d5582af4009,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.034402 kubelet[3386]: E0620 18:27:06.034289 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.034469 kubelet[3386]: E0620 18:27:06.034418 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9c78985d-gzdjm" Jun 20 18:27:06.034469 kubelet[3386]: E0620 18:27:06.034433 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f9c78985d-gzdjm" Jun 20 18:27:06.034508 kubelet[3386]: E0620 18:27:06.034463 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f9c78985d-gzdjm_calico-system(364325ca-57cf-4846-ab47-2d5582af4009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f9c78985d-gzdjm_calico-system(364325ca-57cf-4846-ab47-2d5582af4009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b49b6ec48e51001b5198a569c9755b108fee9ff84c3010fe8f58aeb88e4ffcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f9c78985d-gzdjm" podUID="364325ca-57cf-4846-ab47-2d5582af4009" Jun 20 18:27:06.044806 containerd[1891]: time="2025-06-20T18:27:06.044780049Z" level=error 
msg="Failed to destroy network for sandbox \"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.053210 containerd[1891]: time="2025-06-20T18:27:06.053129654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-r7clf,Uid:83c44774-a76a-4f8b-9061-9919c8d09dde,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.053488 kubelet[3386]: E0620 18:27:06.053293 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.053488 kubelet[3386]: E0620 18:27:06.053336 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" Jun 20 18:27:06.053488 kubelet[3386]: E0620 18:27:06.053351 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" Jun 20 18:27:06.053553 kubelet[3386]: E0620 18:27:06.053383 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bf6fb6d-r7clf_calico-apiserver(83c44774-a76a-4f8b-9061-9919c8d09dde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bf6fb6d-r7clf_calico-apiserver(83c44774-a76a-4f8b-9061-9919c8d09dde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fba305b5d7f4e231cae68b23d8ec6fafb5d32e9b5e5c3d337e1489c1a26bb33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" podUID="83c44774-a76a-4f8b-9061-9919c8d09dde" Jun 20 18:27:06.059057 containerd[1891]: time="2025-06-20T18:27:06.059029802Z" level=error msg="Failed to destroy network for sandbox \"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.060530 systemd[1]: run-netns-cni\x2d984170d7\x2df831\x2d0f31\x2d9129\x2d76e2adb4bb46.mount: Deactivated successfully. 
Jun 20 18:27:06.068035 containerd[1891]: time="2025-06-20T18:27:06.067951042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-c75mx,Uid:dc05d7a5-d79d-4901-9734-e49ba246cf63,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.068128 kubelet[3386]: E0620 18:27:06.068110 3386 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:06.068181 kubelet[3386]: E0620 18:27:06.068141 3386 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" Jun 20 18:27:06.068181 kubelet[3386]: E0620 18:27:06.068178 3386 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" Jun 20 18:27:06.068250 kubelet[3386]: E0620 18:27:06.068205 3386 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bf6fb6d-c75mx_calico-apiserver(dc05d7a5-d79d-4901-9734-e49ba246cf63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bf6fb6d-c75mx_calico-apiserver(dc05d7a5-d79d-4901-9734-e49ba246cf63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8289cbe00c4d5aa7642a60b719983742785107be8ae27a87088e4466f12bd7fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" podUID="dc05d7a5-d79d-4901-9734-e49ba246cf63" Jun 20 18:27:08.957323 kubelet[3386]: I0620 18:27:08.956960 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:09.324014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704116663.mount: Deactivated successfully. 
Jun 20 18:27:10.112059 containerd[1891]: time="2025-06-20T18:27:10.111703245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:10.115655 containerd[1891]: time="2025-06-20T18:27:10.115622384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 20 18:27:10.120082 containerd[1891]: time="2025-06-20T18:27:10.120031547Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:10.125517 containerd[1891]: time="2025-06-20T18:27:10.125478842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:10.125987 containerd[1891]: time="2025-06-20T18:27:10.125720463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 4.693157771s" Jun 20 18:27:10.125987 containerd[1891]: time="2025-06-20T18:27:10.125751223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 20 18:27:10.137750 containerd[1891]: time="2025-06-20T18:27:10.137722250Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 18:27:10.175780 containerd[1891]: time="2025-06-20T18:27:10.174720950Z" level=info msg="Container 
4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:10.194183 containerd[1891]: time="2025-06-20T18:27:10.194148670Z" level=info msg="CreateContainer within sandbox \"cca852036a771eb2db1a9d4067ef8d654e6ae3f3228a3bd5096ea5aef8985491\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\"" Jun 20 18:27:10.195378 containerd[1891]: time="2025-06-20T18:27:10.195351565Z" level=info msg="StartContainer for \"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\"" Jun 20 18:27:10.196346 containerd[1891]: time="2025-06-20T18:27:10.196314951Z" level=info msg="connecting to shim 4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4" address="unix:///run/containerd/s/096730e746df6cb5344ece2d99c852c69ffa0d2e5d18179e0c32058380de8117" protocol=ttrpc version=3 Jun 20 18:27:10.214275 systemd[1]: Started cri-containerd-4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4.scope - libcontainer container 4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4. 
Jun 20 18:27:10.250089 containerd[1891]: time="2025-06-20T18:27:10.249063198Z" level=info msg="StartContainer for \"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" returns successfully" Jun 20 18:27:10.459492 kubelet[3386]: I0620 18:27:10.459267 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xp6h8" podStartSLOduration=1.5191530229999999 podStartE2EDuration="16.459251515s" podCreationTimestamp="2025-06-20 18:26:54 +0000 UTC" firstStartedPulling="2025-06-20 18:26:55.186240574 +0000 UTC m=+19.925857716" lastFinishedPulling="2025-06-20 18:27:10.126339066 +0000 UTC m=+34.865956208" observedRunningTime="2025-06-20 18:27:10.458283064 +0000 UTC m=+35.197900206" watchObservedRunningTime="2025-06-20 18:27:10.459251515 +0000 UTC m=+35.198868673" Jun 20 18:27:10.481693 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 18:27:10.481767 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jun 20 18:27:10.638462 kubelet[3386]: I0620 18:27:10.638436 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair\") pod \"364325ca-57cf-4846-ab47-2d5582af4009\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " Jun 20 18:27:10.638835 kubelet[3386]: I0620 18:27:10.638816 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttgx\" (UniqueName: \"kubernetes.io/projected/364325ca-57cf-4846-ab47-2d5582af4009-kube-api-access-2ttgx\") pod \"364325ca-57cf-4846-ab47-2d5582af4009\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " Jun 20 18:27:10.639112 kubelet[3386]: I0620 18:27:10.638915 3386 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364325ca-57cf-4846-ab47-2d5582af4009-whisker-ca-bundle\") pod \"364325ca-57cf-4846-ab47-2d5582af4009\" (UID: \"364325ca-57cf-4846-ab47-2d5582af4009\") " Jun 20 18:27:10.639584 kubelet[3386]: I0620 18:27:10.639508 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364325ca-57cf-4846-ab47-2d5582af4009-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "364325ca-57cf-4846-ab47-2d5582af4009" (UID: "364325ca-57cf-4846-ab47-2d5582af4009"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 18:27:10.641932 systemd[1]: var-lib-kubelet-pods-364325ca\x2d57cf\x2d4846\x2dab47\x2d2d5582af4009-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jun 20 18:27:10.643792 kubelet[3386]: I0620 18:27:10.643622 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "364325ca-57cf-4846-ab47-2d5582af4009" (UID: "364325ca-57cf-4846-ab47-2d5582af4009"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 18:27:10.645980 systemd[1]: var-lib-kubelet-pods-364325ca\x2d57cf\x2d4846\x2dab47\x2d2d5582af4009-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ttgx.mount: Deactivated successfully. Jun 20 18:27:10.646272 kubelet[3386]: I0620 18:27:10.646252 3386 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364325ca-57cf-4846-ab47-2d5582af4009-kube-api-access-2ttgx" (OuterVolumeSpecName: "kube-api-access-2ttgx") pod "364325ca-57cf-4846-ab47-2d5582af4009" (UID: "364325ca-57cf-4846-ab47-2d5582af4009"). InnerVolumeSpecName "kube-api-access-2ttgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 18:27:10.740382 kubelet[3386]: I0620 18:27:10.740264 3386 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/364325ca-57cf-4846-ab47-2d5582af4009-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-c937e4b650\" DevicePath \"\"" Jun 20 18:27:10.740382 kubelet[3386]: I0620 18:27:10.740290 3386 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ttgx\" (UniqueName: \"kubernetes.io/projected/364325ca-57cf-4846-ab47-2d5582af4009-kube-api-access-2ttgx\") on node \"ci-4344.1.0-a-c937e4b650\" DevicePath \"\"" Jun 20 18:27:10.740382 kubelet[3386]: I0620 18:27:10.740300 3386 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364325ca-57cf-4846-ab47-2d5582af4009-whisker-ca-bundle\") on node \"ci-4344.1.0-a-c937e4b650\" DevicePath \"\"" Jun 20 18:27:11.342512 systemd[1]: Removed slice kubepods-besteffort-pod364325ca_57cf_4846_ab47_2d5582af4009.slice - libcontainer container kubepods-besteffort-pod364325ca_57cf_4846_ab47_2d5582af4009.slice. Jun 20 18:27:11.538590 systemd[1]: Created slice kubepods-besteffort-podd9af568d_40f3_4906_9ef0_a42942514bea.slice - libcontainer container kubepods-besteffort-podd9af568d_40f3_4906_9ef0_a42942514bea.slice. 
Jun 20 18:27:11.644321 kubelet[3386]: I0620 18:27:11.644191 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gvg\" (UniqueName: \"kubernetes.io/projected/d9af568d-40f3-4906-9ef0-a42942514bea-kube-api-access-c9gvg\") pod \"whisker-5d99c85dcf-q7dd8\" (UID: \"d9af568d-40f3-4906-9ef0-a42942514bea\") " pod="calico-system/whisker-5d99c85dcf-q7dd8" Jun 20 18:27:11.644321 kubelet[3386]: I0620 18:27:11.644233 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9af568d-40f3-4906-9ef0-a42942514bea-whisker-ca-bundle\") pod \"whisker-5d99c85dcf-q7dd8\" (UID: \"d9af568d-40f3-4906-9ef0-a42942514bea\") " pod="calico-system/whisker-5d99c85dcf-q7dd8" Jun 20 18:27:11.644321 kubelet[3386]: I0620 18:27:11.644251 3386 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9af568d-40f3-4906-9ef0-a42942514bea-whisker-backend-key-pair\") pod \"whisker-5d99c85dcf-q7dd8\" (UID: \"d9af568d-40f3-4906-9ef0-a42942514bea\") " pod="calico-system/whisker-5d99c85dcf-q7dd8" Jun 20 18:27:11.843689 containerd[1891]: time="2025-06-20T18:27:11.843453167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d99c85dcf-q7dd8,Uid:d9af568d-40f3-4906-9ef0-a42942514bea,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:12.031521 systemd-networkd[1484]: cali53a62fbd1cd: Link UP Jun 20 18:27:12.032807 systemd-networkd[1484]: cali53a62fbd1cd: Gained carrier Jun 20 18:27:12.054019 containerd[1891]: 2025-06-20 18:27:11.878 [INFO][4489] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 18:27:12.054019 containerd[1891]: 2025-06-20 18:27:11.898 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0 whisker-5d99c85dcf- calico-system d9af568d-40f3-4906-9ef0-a42942514bea 871 0 2025-06-20 18:27:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d99c85dcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 whisker-5d99c85dcf-q7dd8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali53a62fbd1cd [] [] }} ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-" Jun 20 18:27:12.054019 containerd[1891]: 2025-06-20 18:27:11.898 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.054019 containerd[1891]: 2025-06-20 18:27:11.924 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" HandleID="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Workload="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.924 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" HandleID="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Workload="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa020), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4344.1.0-a-c937e4b650", "pod":"whisker-5d99c85dcf-q7dd8", "timestamp":"2025-06-20 18:27:11.924035989 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.924 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.924 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.924 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.930 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.935 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.939 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.940 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054185 containerd[1891]: 2025-06-20 18:27:11.942 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.942 [INFO][4502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 
handle="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.944 [INFO][4502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2 Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.948 [INFO][4502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.963 [INFO][4502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.129/26] block=192.168.45.128/26 handle="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.963 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.129/26] handle="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.963 [INFO][4502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:27:12.054325 containerd[1891]: 2025-06-20 18:27:11.963 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.129/26] IPv6=[] ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" HandleID="k8s-pod-network.920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Workload="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.055248 containerd[1891]: 2025-06-20 18:27:11.967 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0", GenerateName:"whisker-5d99c85dcf-", Namespace:"calico-system", SelfLink:"", UID:"d9af568d-40f3-4906-9ef0-a42942514bea", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d99c85dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"whisker-5d99c85dcf-q7dd8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali53a62fbd1cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:12.055248 containerd[1891]: 2025-06-20 18:27:11.968 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.129/32] ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.055354 containerd[1891]: 2025-06-20 18:27:11.968 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53a62fbd1cd ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.055354 containerd[1891]: 2025-06-20 18:27:12.033 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.055384 containerd[1891]: 2025-06-20 18:27:12.033 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0", GenerateName:"whisker-5d99c85dcf-", Namespace:"calico-system", SelfLink:"", 
UID:"d9af568d-40f3-4906-9ef0-a42942514bea", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d99c85dcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2", Pod:"whisker-5d99c85dcf-q7dd8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53a62fbd1cd", MAC:"76:45:34:72:68:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:12.055420 containerd[1891]: 2025-06-20 18:27:12.051 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" Namespace="calico-system" Pod="whisker-5d99c85dcf-q7dd8" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-whisker--5d99c85dcf--q7dd8-eth0" Jun 20 18:27:12.153534 containerd[1891]: time="2025-06-20T18:27:12.153500774Z" level=info msg="connecting to shim 920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2" address="unix:///run/containerd/s/17dcae8bbeb9d0a1db2bb8eb52ee243e7b998cad61b79cb749f8a1d3fc190a39" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:12.171389 systemd[1]: Started 
cri-containerd-920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2.scope - libcontainer container 920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2. Jun 20 18:27:12.205717 containerd[1891]: time="2025-06-20T18:27:12.205690091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d99c85dcf-q7dd8,Uid:d9af568d-40f3-4906-9ef0-a42942514bea,Namespace:calico-system,Attempt:0,} returns sandbox id \"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2\"" Jun 20 18:27:12.206994 containerd[1891]: time="2025-06-20T18:27:12.206932850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 18:27:12.424524 systemd-networkd[1484]: vxlan.calico: Link UP Jun 20 18:27:12.424529 systemd-networkd[1484]: vxlan.calico: Gained carrier Jun 20 18:27:13.336374 kubelet[3386]: I0620 18:27:13.336337 3386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364325ca-57cf-4846-ab47-2d5582af4009" path="/var/lib/kubelet/pods/364325ca-57cf-4846-ab47-2d5582af4009/volumes" Jun 20 18:27:13.433206 containerd[1891]: time="2025-06-20T18:27:13.433155438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:13.436036 containerd[1891]: time="2025-06-20T18:27:13.436002020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 20 18:27:13.440686 containerd[1891]: time="2025-06-20T18:27:13.440646028Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:13.446289 containerd[1891]: time="2025-06-20T18:27:13.446247038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jun 20 18:27:13.446720 containerd[1891]: time="2025-06-20T18:27:13.446571108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 1.23961493s" Jun 20 18:27:13.446720 containerd[1891]: time="2025-06-20T18:27:13.446599821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 20 18:27:13.448315 containerd[1891]: time="2025-06-20T18:27:13.448284325Z" level=info msg="CreateContainer within sandbox \"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 18:27:13.480082 containerd[1891]: time="2025-06-20T18:27:13.479576797Z" level=info msg="Container 9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:13.500756 containerd[1891]: time="2025-06-20T18:27:13.500730798Z" level=info msg="CreateContainer within sandbox \"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af\"" Jun 20 18:27:13.502053 containerd[1891]: time="2025-06-20T18:27:13.502037087Z" level=info msg="StartContainer for \"9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af\"" Jun 20 18:27:13.503298 containerd[1891]: time="2025-06-20T18:27:13.503272558Z" level=info msg="connecting to shim 9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af" address="unix:///run/containerd/s/17dcae8bbeb9d0a1db2bb8eb52ee243e7b998cad61b79cb749f8a1d3fc190a39" protocol=ttrpc version=3 Jun 20 
18:27:13.513405 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Jun 20 18:27:13.521194 systemd[1]: Started cri-containerd-9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af.scope - libcontainer container 9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af. Jun 20 18:27:13.641201 systemd-networkd[1484]: cali53a62fbd1cd: Gained IPv6LL Jun 20 18:27:13.734559 containerd[1891]: time="2025-06-20T18:27:13.734495048Z" level=info msg="StartContainer for \"9709a002e6c5649922bc2a728e7c89e379a3bd5281e6e89d130aae6fc72670af\" returns successfully" Jun 20 18:27:13.737152 containerd[1891]: time="2025-06-20T18:27:13.737117722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 18:27:15.820013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819054990.mount: Deactivated successfully. Jun 20 18:27:15.902180 containerd[1891]: time="2025-06-20T18:27:15.902143679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:15.905013 containerd[1891]: time="2025-06-20T18:27:15.904986021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 20 18:27:15.910726 containerd[1891]: time="2025-06-20T18:27:15.910682089Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:15.915558 containerd[1891]: time="2025-06-20T18:27:15.915508996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:15.916187 containerd[1891]: time="2025-06-20T18:27:15.915860507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with 
image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 2.178710616s" Jun 20 18:27:15.916187 containerd[1891]: time="2025-06-20T18:27:15.915885451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 20 18:27:15.917769 containerd[1891]: time="2025-06-20T18:27:15.917747399Z" level=info msg="CreateContainer within sandbox \"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 18:27:15.951002 containerd[1891]: time="2025-06-20T18:27:15.950962708Z" level=info msg="Container d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:15.970103 containerd[1891]: time="2025-06-20T18:27:15.970054581Z" level=info msg="CreateContainer within sandbox \"920448b3e490b7fd62ea54c7685e0be10ead1eaa39be4dc18cc972cf503c7fa2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491\"" Jun 20 18:27:15.971089 containerd[1891]: time="2025-06-20T18:27:15.970622288Z" level=info msg="StartContainer for \"d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491\"" Jun 20 18:27:15.972054 containerd[1891]: time="2025-06-20T18:27:15.972029763Z" level=info msg="connecting to shim d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491" address="unix:///run/containerd/s/17dcae8bbeb9d0a1db2bb8eb52ee243e7b998cad61b79cb749f8a1d3fc190a39" protocol=ttrpc version=3 Jun 20 18:27:15.993202 systemd[1]: Started 
cri-containerd-d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491.scope - libcontainer container d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491. Jun 20 18:27:16.025961 containerd[1891]: time="2025-06-20T18:27:16.025891231Z" level=info msg="StartContainer for \"d66fc7d2ad7c3f99f797c1d618b13a479a853b0cf69626bf6a9c730f55e14491\" returns successfully" Jun 20 18:27:16.234549 kubelet[3386]: I0620 18:27:16.234303 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:16.282678 containerd[1891]: time="2025-06-20T18:27:16.282644093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"e9b211277d72721d16919a0af1e5fc38e201fe90ce63627c86484dfce4d3dda7\" pid:4756 exited_at:{seconds:1750444036 nanos:282120795}" Jun 20 18:27:16.337136 containerd[1891]: time="2025-06-20T18:27:16.336812718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-86p6k,Uid:7f743136-252c-4fc8-8349-f6f235c545b3,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:16.342322 containerd[1891]: time="2025-06-20T18:27:16.342289022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"b28d17b08a5aa7b1b52c16e23f7f054082be1df4e8ca992944d67324fcc41fd6\" pid:4781 exited_at:{seconds:1750444036 nanos:341774540}" Jun 20 18:27:16.434341 systemd-networkd[1484]: califb33d4c23fc: Link UP Jun 20 18:27:16.436154 systemd-networkd[1484]: califb33d4c23fc: Gained carrier Jun 20 18:27:16.453096 containerd[1891]: 2025-06-20 18:27:16.374 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0 goldmane-dc7b455cb- calico-system 7f743136-252c-4fc8-8349-f6f235c545b3 787 0 2025-06-20 18:26:54 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:dc7b455cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 goldmane-dc7b455cb-86p6k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califb33d4c23fc [] [] }} ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-" Jun 20 18:27:16.453096 containerd[1891]: 2025-06-20 18:27:16.374 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453096 containerd[1891]: 2025-06-20 18:27:16.392 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" HandleID="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Workload="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.392 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" HandleID="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Workload="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-c937e4b650", "pod":"goldmane-dc7b455cb-86p6k", "timestamp":"2025-06-20 18:27:16.392280593 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.392 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.392 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.392 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.399 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.406 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.409 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.411 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453387 containerd[1891]: 2025-06-20 18:27:16.412 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.412 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.414 [INFO][4806] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.418 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.428 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.130/26] block=192.168.45.128/26 handle="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.428 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.130/26] handle="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.428 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:27:16.453561 containerd[1891]: 2025-06-20 18:27:16.428 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.130/26] IPv6=[] ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" HandleID="k8s-pod-network.c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Workload="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453661 containerd[1891]: 2025-06-20 18:27:16.430 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"7f743136-252c-4fc8-8349-f6f235c545b3", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"goldmane-dc7b455cb-86p6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"califb33d4c23fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:16.453661 containerd[1891]: 2025-06-20 18:27:16.430 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.130/32] ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453738 containerd[1891]: 2025-06-20 18:27:16.430 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb33d4c23fc ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453738 containerd[1891]: 2025-06-20 18:27:16.436 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.453767 containerd[1891]: 2025-06-20 18:27:16.437 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", 
UID:"7f743136-252c-4fc8-8349-f6f235c545b3", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c", Pod:"goldmane-dc7b455cb-86p6k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califb33d4c23fc", MAC:"ca:a1:1b:28:a4:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:16.453800 containerd[1891]: 2025-06-20 18:27:16.451 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" Namespace="calico-system" Pod="goldmane-dc7b455cb-86p6k" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-goldmane--dc7b455cb--86p6k-eth0" Jun 20 18:27:16.486807 kubelet[3386]: I0620 18:27:16.485692 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d99c85dcf-q7dd8" podStartSLOduration=1.775973546 podStartE2EDuration="5.485677769s" podCreationTimestamp="2025-06-20 18:27:11 +0000 UTC" firstStartedPulling="2025-06-20 18:27:12.206748943 +0000 UTC m=+36.946366085" lastFinishedPulling="2025-06-20 
18:27:15.916453166 +0000 UTC m=+40.656070308" observedRunningTime="2025-06-20 18:27:16.485525782 +0000 UTC m=+41.225142924" watchObservedRunningTime="2025-06-20 18:27:16.485677769 +0000 UTC m=+41.225294911" Jun 20 18:27:16.514521 containerd[1891]: time="2025-06-20T18:27:16.514426546Z" level=info msg="connecting to shim c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c" address="unix:///run/containerd/s/b9e3e00cdf576d34a1759e00860ad3f8fa78006580e5f690c5efbb6388444836" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:16.529195 systemd[1]: Started cri-containerd-c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c.scope - libcontainer container c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c. Jun 20 18:27:16.557964 containerd[1891]: time="2025-06-20T18:27:16.557927858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-86p6k,Uid:7f743136-252c-4fc8-8349-f6f235c545b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c\"" Jun 20 18:27:16.560354 containerd[1891]: time="2025-06-20T18:27:16.560152700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 18:27:17.801219 systemd-networkd[1484]: califb33d4c23fc: Gained IPv6LL Jun 20 18:27:18.335830 containerd[1891]: time="2025-06-20T18:27:18.335579501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-r7clf,Uid:83c44774-a76a-4f8b-9061-9919c8d09dde,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:18.335830 containerd[1891]: time="2025-06-20T18:27:18.335753256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh6nt,Uid:d87a4850-5e3c-4d66-a5fc-1cb820fe465f,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:18.336328 containerd[1891]: time="2025-06-20T18:27:18.336304747Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-76b8dd894f-plbdq,Uid:2c9fa1bb-fb94-4eeb-a246-510a247052b1,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:18.398413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583630188.mount: Deactivated successfully. Jun 20 18:27:18.560918 systemd-networkd[1484]: cali84926b230b2: Link UP Jun 20 18:27:18.561515 systemd-networkd[1484]: cali84926b230b2: Gained carrier Jun 20 18:27:18.589079 containerd[1891]: 2025-06-20 18:27:18.479 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0 calico-apiserver-68bf6fb6d- calico-apiserver 83c44774-a76a-4f8b-9061-9919c8d09dde 794 0 2025-06-20 18:26:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bf6fb6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 calico-apiserver-68bf6fb6d-r7clf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84926b230b2 [] [] }} ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-" Jun 20 18:27:18.589079 containerd[1891]: 2025-06-20 18:27:18.480 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.589079 containerd[1891]: 2025-06-20 18:27:18.503 [INFO][4896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" HandleID="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.504 [INFO][4896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" HandleID="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bd90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-c937e4b650", "pod":"calico-apiserver-68bf6fb6d-r7clf", "timestamp":"2025-06-20 18:27:18.503938238 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.504 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.504 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.504 [INFO][4896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.510 [INFO][4896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.516 [INFO][4896] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.520 [INFO][4896] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.523 [INFO][4896] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589226 containerd[1891]: 2025-06-20 18:27:18.526 [INFO][4896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.527 [INFO][4896] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.530 [INFO][4896] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402 Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.538 [INFO][4896] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.549 [INFO][4896] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.45.131/26] block=192.168.45.128/26 handle="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.549 [INFO][4896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.131/26] handle="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.549 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:18.589360 containerd[1891]: 2025-06-20 18:27:18.550 [INFO][4896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.131/26] IPv6=[] ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" HandleID="k8s-pod-network.8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.590214 containerd[1891]: 2025-06-20 18:27:18.554 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0", GenerateName:"calico-apiserver-68bf6fb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"83c44774-a76a-4f8b-9061-9919c8d09dde", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"68bf6fb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"calico-apiserver-68bf6fb6d-r7clf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84926b230b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.590261 containerd[1891]: 2025-06-20 18:27:18.554 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.131/32] ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.590261 containerd[1891]: 2025-06-20 18:27:18.554 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84926b230b2 ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.590261 containerd[1891]: 2025-06-20 18:27:18.558 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" 
WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.590311 containerd[1891]: 2025-06-20 18:27:18.559 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0", GenerateName:"calico-apiserver-68bf6fb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"83c44774-a76a-4f8b-9061-9919c8d09dde", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bf6fb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402", Pod:"calico-apiserver-68bf6fb6d-r7clf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84926b230b2", MAC:"b2:dd:dc:17:9b:bf", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.590345 containerd[1891]: 2025-06-20 18:27:18.584 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-r7clf" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--r7clf-eth0" Jun 20 18:27:18.663970 containerd[1891]: time="2025-06-20T18:27:18.663660050Z" level=info msg="connecting to shim 8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402" address="unix:///run/containerd/s/463b758389f27337d115094b999a2404c4bf40e9094dafe80e9eb39a280e8c3d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:18.687313 systemd[1]: Started cri-containerd-8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402.scope - libcontainer container 8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402. 
Jun 20 18:27:18.703049 systemd-networkd[1484]: calie479b2d86a6: Link UP Jun 20 18:27:18.704476 systemd-networkd[1484]: calie479b2d86a6: Gained carrier Jun 20 18:27:18.728570 containerd[1891]: 2025-06-20 18:27:18.539 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0 calico-kube-controllers-76b8dd894f- calico-system 2c9fa1bb-fb94-4eeb-a246-510a247052b1 791 0 2025-06-20 18:26:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76b8dd894f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 calico-kube-controllers-76b8dd894f-plbdq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie479b2d86a6 [] [] }} ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-" Jun 20 18:27:18.728570 containerd[1891]: 2025-06-20 18:27:18.541 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.728570 containerd[1891]: 2025-06-20 18:27:18.610 [INFO][4933] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" HandleID="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" 
Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.611 [INFO][4933] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" HandleID="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000346690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-c937e4b650", "pod":"calico-kube-controllers-76b8dd894f-plbdq", "timestamp":"2025-06-20 18:27:18.610799768 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.611 [INFO][4933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.611 [INFO][4933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.611 [INFO][4933] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.621 [INFO][4933] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.626 [INFO][4933] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.631 [INFO][4933] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.633 [INFO][4933] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728781 containerd[1891]: 2025-06-20 18:27:18.636 [INFO][4933] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.636 [INFO][4933] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.639 [INFO][4933] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5 Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.676 [INFO][4933] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4933] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.45.132/26] block=192.168.45.128/26 handle="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4933] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.132/26] handle="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:18.728926 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4933] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.132/26] IPv6=[] ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" HandleID="k8s-pod-network.0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.729021 containerd[1891]: 2025-06-20 18:27:18.699 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0", GenerateName:"calico-kube-controllers-76b8dd894f-", Namespace:"calico-system", SelfLink:"", UID:"2c9fa1bb-fb94-4eeb-a246-510a247052b1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b8dd894f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"calico-kube-controllers-76b8dd894f-plbdq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie479b2d86a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.729056 containerd[1891]: 2025-06-20 18:27:18.699 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.132/32] ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.729056 containerd[1891]: 2025-06-20 18:27:18.699 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie479b2d86a6 ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.729056 containerd[1891]: 2025-06-20 18:27:18.706 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.729116 containerd[1891]: 2025-06-20 18:27:18.707 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0", GenerateName:"calico-kube-controllers-76b8dd894f-", Namespace:"calico-system", SelfLink:"", UID:"2c9fa1bb-fb94-4eeb-a246-510a247052b1", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b8dd894f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5", Pod:"calico-kube-controllers-76b8dd894f-plbdq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie479b2d86a6", MAC:"aa:e6:51:f3:90:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.729154 containerd[1891]: 2025-06-20 18:27:18.726 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" Namespace="calico-system" Pod="calico-kube-controllers-76b8dd894f-plbdq" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--kube--controllers--76b8dd894f--plbdq-eth0" Jun 20 18:27:18.773874 systemd-networkd[1484]: cali03bfef9d0cb: Link UP Jun 20 18:27:18.774015 systemd-networkd[1484]: cali03bfef9d0cb: Gained carrier Jun 20 18:27:18.775363 containerd[1891]: time="2025-06-20T18:27:18.775145732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-r7clf,Uid:83c44774-a76a-4f8b-9061-9919c8d09dde,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402\"" Jun 20 18:27:18.795793 containerd[1891]: 2025-06-20 18:27:18.553 [INFO][4900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0 csi-node-driver- calico-system d87a4850-5e3c-4d66-a5fc-1cb820fe465f 682 0 2025-06-20 18:26:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:896496fb5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 csi-node-driver-zh6nt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali03bfef9d0cb [] [] }} 
ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-" Jun 20 18:27:18.795793 containerd[1891]: 2025-06-20 18:27:18.554 [INFO][4900] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.795793 containerd[1891]: 2025-06-20 18:27:18.612 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" HandleID="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Workload="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.613 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" HandleID="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Workload="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-c937e4b650", "pod":"csi-node-driver-zh6nt", "timestamp":"2025-06-20 18:27:18.61256917 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.613 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.693 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.722 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.731 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.739 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.741 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796128 containerd[1891]: 2025-06-20 18:27:18.743 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.743 [INFO][4940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.744 [INFO][4940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222 Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.754 [INFO][4940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" 
host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.767 [INFO][4940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.133/26] block=192.168.45.128/26 handle="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.767 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.133/26] handle="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.767 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:18.796559 containerd[1891]: 2025-06-20 18:27:18.767 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.133/26] IPv6=[] ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" HandleID="k8s-pod-network.90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Workload="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.796918 containerd[1891]: 2025-06-20 18:27:18.770 [INFO][4900] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d87a4850-5e3c-4d66-a5fc-1cb820fe465f", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"csi-node-driver-zh6nt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03bfef9d0cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.796984 containerd[1891]: 2025-06-20 18:27:18.770 [INFO][4900] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.133/32] ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.796984 containerd[1891]: 2025-06-20 18:27:18.770 [INFO][4900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03bfef9d0cb ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.796984 containerd[1891]: 2025-06-20 18:27:18.772 [INFO][4900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" 
Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.797038 containerd[1891]: 2025-06-20 18:27:18.772 [INFO][4900] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d87a4850-5e3c-4d66-a5fc-1cb820fe465f", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222", Pod:"csi-node-driver-zh6nt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali03bfef9d0cb", MAC:"3e:90:c5:ec:31:f5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:18.797099 containerd[1891]: 2025-06-20 18:27:18.788 [INFO][4900] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" Namespace="calico-system" Pod="csi-node-driver-zh6nt" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-csi--node--driver--zh6nt-eth0" Jun 20 18:27:18.809093 containerd[1891]: time="2025-06-20T18:27:18.809027836Z" level=info msg="connecting to shim 0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5" address="unix:///run/containerd/s/c1676b20f5d98c26157377e4609a6aeb71de1fd8a6edd8ef20ffb0268a7aebc0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:18.834188 systemd[1]: Started cri-containerd-0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5.scope - libcontainer container 0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5. Jun 20 18:27:18.851911 containerd[1891]: time="2025-06-20T18:27:18.851726364Z" level=info msg="connecting to shim 90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222" address="unix:///run/containerd/s/faa7c1ce2e22a109f9c191ac4e3e24959fe1f5a28ba8e564124594921e09c428" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:18.888350 systemd[1]: Started cri-containerd-90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222.scope - libcontainer container 90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222. 
Jun 20 18:27:19.217425 containerd[1891]: time="2025-06-20T18:27:19.217313790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b8dd894f-plbdq,Uid:2c9fa1bb-fb94-4eeb-a246-510a247052b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5\"" Jun 20 18:27:19.335464 containerd[1891]: time="2025-06-20T18:27:19.335397326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hsf8m,Uid:223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:19.341217 containerd[1891]: time="2025-06-20T18:27:19.341192597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh6nt,Uid:d87a4850-5e3c-4d66-a5fc-1cb820fe465f,Namespace:calico-system,Attempt:0,} returns sandbox id \"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222\"" Jun 20 18:27:19.439635 containerd[1891]: time="2025-06-20T18:27:19.439604349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:19.443138 containerd[1891]: time="2025-06-20T18:27:19.443110040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 20 18:27:19.450013 containerd[1891]: time="2025-06-20T18:27:19.449965611Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:19.458766 containerd[1891]: time="2025-06-20T18:27:19.458739915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:19.459301 containerd[1891]: time="2025-06-20T18:27:19.459137435Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 2.898957167s" Jun 20 18:27:19.459301 containerd[1891]: time="2025-06-20T18:27:19.459235565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 20 18:27:19.461246 containerd[1891]: time="2025-06-20T18:27:19.461228899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:27:19.462741 containerd[1891]: time="2025-06-20T18:27:19.461932328Z" level=info msg="CreateContainer within sandbox \"c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 18:27:19.480329 systemd-networkd[1484]: cali40baee72a31: Link UP Jun 20 18:27:19.481652 systemd-networkd[1484]: cali40baee72a31: Gained carrier Jun 20 18:27:19.495604 containerd[1891]: time="2025-06-20T18:27:19.495403664Z" level=info msg="Container ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:19.507684 containerd[1891]: 2025-06-20 18:27:19.415 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0 coredns-7c65d6cfc9- kube-system 223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80 783 0 2025-06-20 18:26:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 coredns-7c65d6cfc9-hsf8m eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali40baee72a31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-" Jun 20 18:27:19.507684 containerd[1891]: 2025-06-20 18:27:19.415 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.507684 containerd[1891]: 2025-06-20 18:27:19.431 [INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" HandleID="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.432 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" HandleID="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1070), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-c937e4b650", "pod":"coredns-7c65d6cfc9-hsf8m", "timestamp":"2025-06-20 18:27:19.43197474 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.432 
[INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.432 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.432 [INFO][5131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.442 [INFO][5131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.446 [INFO][5131] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.450 [INFO][5131] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.451 [INFO][5131] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.507977 containerd[1891]: 2025-06-20 18:27:19.453 [INFO][5131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.453 [INFO][5131] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.454 [INFO][5131] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8 Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.463 [INFO][5131] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 
handle="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.473 [INFO][5131] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.134/26] block=192.168.45.128/26 handle="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.474 [INFO][5131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.134/26] handle="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.474 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:19.508361 containerd[1891]: 2025-06-20 18:27:19.474 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.134/26] IPv6=[] ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" HandleID="k8s-pod-network.37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.475 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 42, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"coredns-7c65d6cfc9-hsf8m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40baee72a31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.476 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.134/32] ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.476 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40baee72a31 ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.481 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.483 [INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8", Pod:"coredns-7c65d6cfc9-hsf8m", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.45.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40baee72a31", MAC:"ba:06:fd:ed:48:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:19.508964 containerd[1891]: 2025-06-20 18:27:19.503 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hsf8m" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--hsf8m-eth0" Jun 20 18:27:19.519417 containerd[1891]: time="2025-06-20T18:27:19.519389986Z" level=info msg="CreateContainer within sandbox \"c8d0e7c9f427c1b599882311f851c6e3196d79fa5bbe2d2bc85abec4c48b671c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\"" Jun 20 18:27:19.519892 containerd[1891]: time="2025-06-20T18:27:19.519872347Z" level=info msg="StartContainer for \"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\"" Jun 20 18:27:19.520943 containerd[1891]: time="2025-06-20T18:27:19.520923343Z" level=info msg="connecting to shim ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5" address="unix:///run/containerd/s/b9e3e00cdf576d34a1759e00860ad3f8fa78006580e5f690c5efbb6388444836" protocol=ttrpc version=3 Jun 20 18:27:19.541196 systemd[1]: Started 
cri-containerd-ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5.scope - libcontainer container ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5. Jun 20 18:27:19.567239 containerd[1891]: time="2025-06-20T18:27:19.567176955Z" level=info msg="connecting to shim 37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8" address="unix:///run/containerd/s/19ffd3b82229ba1fa478d33decccd19064c105c5f6cd122bf492fd300daf4e5f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:19.594155 containerd[1891]: time="2025-06-20T18:27:19.594049909Z" level=info msg="StartContainer for \"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" returns successfully" Jun 20 18:27:19.595198 systemd[1]: Started cri-containerd-37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8.scope - libcontainer container 37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8. Jun 20 18:27:19.634526 containerd[1891]: time="2025-06-20T18:27:19.634501562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hsf8m,Uid:223ca0a8-aaba-42fb-83b2-4eeb9dd6fc80,Namespace:kube-system,Attempt:0,} returns sandbox id \"37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8\"" Jun 20 18:27:19.639615 containerd[1891]: time="2025-06-20T18:27:19.639417416Z" level=info msg="CreateContainer within sandbox \"37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:27:19.672110 containerd[1891]: time="2025-06-20T18:27:19.671838323Z" level=info msg="Container 6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:19.692382 containerd[1891]: time="2025-06-20T18:27:19.692349891Z" level=info msg="CreateContainer within sandbox \"37e97901fe26591117935ede6406f07e9889107de473cd7a508036e31c291fa8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c\"" Jun 20 18:27:19.692878 containerd[1891]: time="2025-06-20T18:27:19.692767139Z" level=info msg="StartContainer for \"6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c\"" Jun 20 18:27:19.693816 containerd[1891]: time="2025-06-20T18:27:19.693777894Z" level=info msg="connecting to shim 6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c" address="unix:///run/containerd/s/19ffd3b82229ba1fa478d33decccd19064c105c5f6cd122bf492fd300daf4e5f" protocol=ttrpc version=3 Jun 20 18:27:19.707178 systemd[1]: Started cri-containerd-6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c.scope - libcontainer container 6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c. Jun 20 18:27:19.732759 containerd[1891]: time="2025-06-20T18:27:19.732193028Z" level=info msg="StartContainer for \"6264f34fb1a00d6fd2e65450e00d3b2d526153c15a79d344c17324235008bb6c\" returns successfully" Jun 20 18:27:19.785406 systemd-networkd[1484]: cali84926b230b2: Gained IPv6LL Jun 20 18:27:19.913196 systemd-networkd[1484]: calie479b2d86a6: Gained IPv6LL Jun 20 18:27:20.169195 systemd-networkd[1484]: cali03bfef9d0cb: Gained IPv6LL Jun 20 18:27:20.335668 containerd[1891]: time="2025-06-20T18:27:20.335625847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4bch,Uid:762b512e-7ebf-40d7-a3fb-fa664d1bb2bc,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:20.548974 kubelet[3386]: I0620 18:27:20.548697 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hsf8m" podStartSLOduration=38.548427554 podStartE2EDuration="38.548427554s" podCreationTimestamp="2025-06-20 18:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:20.520927308 +0000 UTC m=+45.260544458" watchObservedRunningTime="2025-06-20 18:27:20.548427554 +0000 
UTC m=+45.288044696" Jun 20 18:27:20.615617 containerd[1891]: time="2025-06-20T18:27:20.615576637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"2fc7fd5960d66a2c080946f7e5860737fd2cab29f64dc66c58ffdfce3a102e55\" pid:5271 exit_status:1 exited_at:{seconds:1750444040 nanos:615209350}" Jun 20 18:27:20.617576 systemd-networkd[1484]: cali40baee72a31: Gained IPv6LL Jun 20 18:27:20.899362 systemd-networkd[1484]: calida3d240ebd8: Link UP Jun 20 18:27:20.900346 systemd-networkd[1484]: calida3d240ebd8: Gained carrier Jun 20 18:27:20.928723 kubelet[3386]: I0620 18:27:20.928668 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-dc7b455cb-86p6k" podStartSLOduration=24.027772928 podStartE2EDuration="26.928648867s" podCreationTimestamp="2025-06-20 18:26:54 +0000 UTC" firstStartedPulling="2025-06-20 18:27:16.559416238 +0000 UTC m=+41.299033380" lastFinishedPulling="2025-06-20 18:27:19.460292169 +0000 UTC m=+44.199909319" observedRunningTime="2025-06-20 18:27:20.550175339 +0000 UTC m=+45.289792481" watchObservedRunningTime="2025-06-20 18:27:20.928648867 +0000 UTC m=+45.668266009" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.829 [INFO][5286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0 coredns-7c65d6cfc9- kube-system 762b512e-7ebf-40d7-a3fb-fa664d1bb2bc 792 0 2025-06-20 18:26:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 coredns-7c65d6cfc9-t4bch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida3d240ebd8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.829 [INFO][5286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.853 [INFO][5299] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" HandleID="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.854 [INFO][5299] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" HandleID="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-c937e4b650", "pod":"coredns-7c65d6cfc9-t4bch", "timestamp":"2025-06-20 18:27:20.853412749 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.854 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.854 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.854 [INFO][5299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.862 [INFO][5299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.867 [INFO][5299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.871 [INFO][5299] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.873 [INFO][5299] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.875 [INFO][5299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.875 [INFO][5299] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.876 [INFO][5299] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271 Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.880 [INFO][5299] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" 
host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.893 [INFO][5299] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.45.135/26] block=192.168.45.128/26 handle="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.893 [INFO][5299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.135/26] handle="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.893 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:20.931159 containerd[1891]: 2025-06-20 18:27:20.893 [INFO][5299] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.135/26] IPv6=[] ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" HandleID="k8s-pod-network.62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Workload="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.895 [INFO][5286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"762b512e-7ebf-40d7-a3fb-fa664d1bb2bc", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"coredns-7c65d6cfc9-t4bch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3d240ebd8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.896 [INFO][5286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.135/32] ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.896 [INFO][5286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida3d240ebd8 ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" 
WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.901 [INFO][5286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.903 [INFO][5286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"762b512e-7ebf-40d7-a3fb-fa664d1bb2bc", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271", Pod:"coredns-7c65d6cfc9-t4bch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida3d240ebd8", MAC:"0e:c6:9d:6a:e4:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:20.932485 containerd[1891]: 2025-06-20 18:27:20.928 [INFO][5286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t4bch" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-coredns--7c65d6cfc9--t4bch-eth0" Jun 20 18:27:21.004445 containerd[1891]: time="2025-06-20T18:27:21.004232351Z" level=info msg="connecting to shim 62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271" address="unix:///run/containerd/s/f0a417785d2bdb267ae1d9bd6a4adecb17ff83ecfd81c62ce875c1b5d11019b2" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:21.024244 systemd[1]: Started cri-containerd-62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271.scope - libcontainer container 62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271. 
Jun 20 18:27:21.063867 containerd[1891]: time="2025-06-20T18:27:21.063839746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t4bch,Uid:762b512e-7ebf-40d7-a3fb-fa664d1bb2bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271\"" Jun 20 18:27:21.067571 containerd[1891]: time="2025-06-20T18:27:21.067513633Z" level=info msg="CreateContainer within sandbox \"62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:27:21.097468 containerd[1891]: time="2025-06-20T18:27:21.097430468Z" level=info msg="Container f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:21.115424 containerd[1891]: time="2025-06-20T18:27:21.115385371Z" level=info msg="CreateContainer within sandbox \"62a7e6b1f1f6c55515494a421475c736473c0b606a7fc5e89092186f7e94e271\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c\"" Jun 20 18:27:21.116472 containerd[1891]: time="2025-06-20T18:27:21.116179946Z" level=info msg="StartContainer for \"f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c\"" Jun 20 18:27:21.118315 containerd[1891]: time="2025-06-20T18:27:21.118274539Z" level=info msg="connecting to shim f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c" address="unix:///run/containerd/s/f0a417785d2bdb267ae1d9bd6a4adecb17ff83ecfd81c62ce875c1b5d11019b2" protocol=ttrpc version=3 Jun 20 18:27:21.140207 systemd[1]: Started cri-containerd-f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c.scope - libcontainer container f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c. 
Jun 20 18:27:21.170865 containerd[1891]: time="2025-06-20T18:27:21.169910173Z" level=info msg="StartContainer for \"f0d9051c27a1b63fcbd12013d433fb2652f828b4d86f902d818ef0cb766c934c\" returns successfully" Jun 20 18:27:21.336727 containerd[1891]: time="2025-06-20T18:27:21.336654055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-c75mx,Uid:dc05d7a5-d79d-4901-9734-e49ba246cf63,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:21.435992 systemd-networkd[1484]: cali4964372ec5b: Link UP Jun 20 18:27:21.437246 systemd-networkd[1484]: cali4964372ec5b: Gained carrier Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.376 [INFO][5393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0 calico-apiserver-68bf6fb6d- calico-apiserver dc05d7a5-d79d-4901-9734-e49ba246cf63 793 0 2025-06-20 18:26:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bf6fb6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-c937e4b650 calico-apiserver-68bf6fb6d-c75mx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4964372ec5b [] [] }} ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.376 [INFO][5393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" 
WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.396 [INFO][5406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" HandleID="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.396 [INFO][5406] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" HandleID="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b060), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-c937e4b650", "pod":"calico-apiserver-68bf6fb6d-c75mx", "timestamp":"2025-06-20 18:27:21.396631818 +0000 UTC"}, Hostname:"ci-4344.1.0-a-c937e4b650", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.396 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.396 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.396 [INFO][5406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-c937e4b650' Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.402 [INFO][5406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.405 [INFO][5406] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.409 [INFO][5406] ipam/ipam.go 511: Trying affinity for 192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.411 [INFO][5406] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.412 [INFO][5406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.128/26 host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.412 [INFO][5406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.45.128/26 handle="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.414 [INFO][5406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3 Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.418 [INFO][5406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.45.128/26 handle="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.430 [INFO][5406] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.45.136/26] block=192.168.45.128/26 handle="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.430 [INFO][5406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.136/26] handle="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" host="ci-4344.1.0-a-c937e4b650" Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.430 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:21.455406 containerd[1891]: 2025-06-20 18:27:21.430 [INFO][5406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.45.136/26] IPv6=[] ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" HandleID="k8s-pod-network.f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Workload="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.432 [INFO][5393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0", GenerateName:"calico-apiserver-68bf6fb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc05d7a5-d79d-4901-9734-e49ba246cf63", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"68bf6fb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"", Pod:"calico-apiserver-68bf6fb6d-c75mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4964372ec5b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.432 [INFO][5393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.136/32] ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.432 [INFO][5393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4964372ec5b ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.439 [INFO][5393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" 
WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.439 [INFO][5393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0", GenerateName:"calico-apiserver-68bf6fb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc05d7a5-d79d-4901-9734-e49ba246cf63", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bf6fb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-c937e4b650", ContainerID:"f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3", Pod:"calico-apiserver-68bf6fb6d-c75mx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4964372ec5b", MAC:"32:4f:2d:e4:94:6e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:21.456885 containerd[1891]: 2025-06-20 18:27:21.453 [INFO][5393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" Namespace="calico-apiserver" Pod="calico-apiserver-68bf6fb6d-c75mx" WorkloadEndpoint="ci--4344.1.0--a--c937e4b650-k8s-calico--apiserver--68bf6fb6d--c75mx-eth0" Jun 20 18:27:21.522247 containerd[1891]: time="2025-06-20T18:27:21.522182465Z" level=info msg="connecting to shim f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3" address="unix:///run/containerd/s/1a8d720bb6d0a009f24d745846234c7bdaa5c6ed3e34c931c25debe7cd9960c1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:21.558834 kubelet[3386]: I0620 18:27:21.558762 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t4bch" podStartSLOduration=39.558749283 podStartE2EDuration="39.558749283s" podCreationTimestamp="2025-06-20 18:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:21.530946816 +0000 UTC m=+46.270563966" watchObservedRunningTime="2025-06-20 18:27:21.558749283 +0000 UTC m=+46.298366425" Jun 20 18:27:21.560226 systemd[1]: Started cri-containerd-f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3.scope - libcontainer container f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3. 
Jun 20 18:27:21.621584 containerd[1891]: time="2025-06-20T18:27:21.621549955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"33391d18be3999a31433172b0ebbfe3b0e2ff6e06ef56ebb5f83ff700a2bdeac\" pid:5445 exit_status:1 exited_at:{seconds:1750444041 nanos:621341855}" Jun 20 18:27:21.724955 containerd[1891]: time="2025-06-20T18:27:21.724846193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bf6fb6d-c75mx,Uid:dc05d7a5-d79d-4901-9734-e49ba246cf63,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3\"" Jun 20 18:27:22.345256 systemd-networkd[1484]: calida3d240ebd8: Gained IPv6LL Jun 20 18:27:22.921313 systemd-networkd[1484]: cali4964372ec5b: Gained IPv6LL Jun 20 18:27:23.398682 containerd[1891]: time="2025-06-20T18:27:23.398632137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:23.401730 containerd[1891]: time="2025-06-20T18:27:23.401688676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 20 18:27:23.413711 containerd[1891]: time="2025-06-20T18:27:23.413670145Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:23.421764 containerd[1891]: time="2025-06-20T18:27:23.421460149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:23.421932 containerd[1891]: time="2025-06-20T18:27:23.421741955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id 
\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 3.959820979s" Jun 20 18:27:23.422015 containerd[1891]: time="2025-06-20T18:27:23.422001312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:27:23.423243 containerd[1891]: time="2025-06-20T18:27:23.423222383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 18:27:23.424743 containerd[1891]: time="2025-06-20T18:27:23.424679811Z" level=info msg="CreateContainer within sandbox \"8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:27:23.461088 containerd[1891]: time="2025-06-20T18:27:23.460673123Z" level=info msg="Container 3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:23.490904 containerd[1891]: time="2025-06-20T18:27:23.490875924Z" level=info msg="CreateContainer within sandbox \"8e7b4d803bb9cc5794b19da6916e33b278b7cd1aef2b0188133f4df542694402\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da\"" Jun 20 18:27:23.491538 containerd[1891]: time="2025-06-20T18:27:23.491516736Z" level=info msg="StartContainer for \"3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da\"" Jun 20 18:27:23.492361 containerd[1891]: time="2025-06-20T18:27:23.492337416Z" level=info msg="connecting to shim 3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da" address="unix:///run/containerd/s/463b758389f27337d115094b999a2404c4bf40e9094dafe80e9eb39a280e8c3d" protocol=ttrpc 
version=3 Jun 20 18:27:23.534316 systemd[1]: Started cri-containerd-3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da.scope - libcontainer container 3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da. Jun 20 18:27:23.586943 containerd[1891]: time="2025-06-20T18:27:23.586768276Z" level=info msg="StartContainer for \"3e7c6e6f978b21044a58f54c7fdcac837887edd0e94fa2a72c54f3188aa668da\" returns successfully" Jun 20 18:27:24.529643 kubelet[3386]: I0620 18:27:24.529569 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68bf6fb6d-r7clf" podStartSLOduration=28.883332388 podStartE2EDuration="33.529554228s" podCreationTimestamp="2025-06-20 18:26:51 +0000 UTC" firstStartedPulling="2025-06-20 18:27:18.776820172 +0000 UTC m=+43.516437314" lastFinishedPulling="2025-06-20 18:27:23.423042012 +0000 UTC m=+48.162659154" observedRunningTime="2025-06-20 18:27:24.52914678 +0000 UTC m=+49.268763922" watchObservedRunningTime="2025-06-20 18:27:24.529554228 +0000 UTC m=+49.269171370" Jun 20 18:27:27.527635 containerd[1891]: time="2025-06-20T18:27:27.527583101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:27.530676 containerd[1891]: time="2025-06-20T18:27:27.530640643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 20 18:27:27.537555 containerd[1891]: time="2025-06-20T18:27:27.537512104Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:27.542605 containerd[1891]: time="2025-06-20T18:27:27.542562267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:27.543141 containerd[1891]: time="2025-06-20T18:27:27.542840238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 4.119473068s" Jun 20 18:27:27.543141 containerd[1891]: time="2025-06-20T18:27:27.542869150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 20 18:27:27.544236 containerd[1891]: time="2025-06-20T18:27:27.544102859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 18:27:27.559409 containerd[1891]: time="2025-06-20T18:27:27.559356092Z" level=info msg="CreateContainer within sandbox \"0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 18:27:27.585277 containerd[1891]: time="2025-06-20T18:27:27.585128663Z" level=info msg="Container f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:27.607260 containerd[1891]: time="2025-06-20T18:27:27.607225980Z" level=info msg="CreateContainer within sandbox \"0ff88cff1024ae990cf41c1fc5e86ebdc25aeb318e627cf051cc0f70a20685c5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\"" Jun 20 18:27:27.607780 containerd[1891]: time="2025-06-20T18:27:27.607761634Z" level=info msg="StartContainer for \"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\"" Jun 20 18:27:27.609080 containerd[1891]: 
time="2025-06-20T18:27:27.609026415Z" level=info msg="connecting to shim f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab" address="unix:///run/containerd/s/c1676b20f5d98c26157377e4609a6aeb71de1fd8a6edd8ef20ffb0268a7aebc0" protocol=ttrpc version=3 Jun 20 18:27:27.630186 systemd[1]: Started cri-containerd-f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab.scope - libcontainer container f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab. Jun 20 18:27:27.669505 containerd[1891]: time="2025-06-20T18:27:27.669417837Z" level=info msg="StartContainer for \"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" returns successfully" Jun 20 18:27:28.536104 kubelet[3386]: I0620 18:27:28.536041 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76b8dd894f-plbdq" podStartSLOduration=25.21242146 podStartE2EDuration="33.536024219s" podCreationTimestamp="2025-06-20 18:26:55 +0000 UTC" firstStartedPulling="2025-06-20 18:27:19.219813213 +0000 UTC m=+43.959430363" lastFinishedPulling="2025-06-20 18:27:27.54341598 +0000 UTC m=+52.283033122" observedRunningTime="2025-06-20 18:27:28.535793463 +0000 UTC m=+53.275410621" watchObservedRunningTime="2025-06-20 18:27:28.536024219 +0000 UTC m=+53.275641361" Jun 20 18:27:28.948756 containerd[1891]: time="2025-06-20T18:27:28.948303927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:28.951316 containerd[1891]: time="2025-06-20T18:27:28.951293336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 20 18:27:28.957600 containerd[1891]: time="2025-06-20T18:27:28.957563961Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 
18:27:28.964092 containerd[1891]: time="2025-06-20T18:27:28.963941683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:28.964625 containerd[1891]: time="2025-06-20T18:27:28.964597440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 1.420465493s" Jun 20 18:27:28.964721 containerd[1891]: time="2025-06-20T18:27:28.964706626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 20 18:27:28.965813 containerd[1891]: time="2025-06-20T18:27:28.965742342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:27:28.967710 containerd[1891]: time="2025-06-20T18:27:28.967364117Z" level=info msg="CreateContainer within sandbox \"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 18:27:29.004800 containerd[1891]: time="2025-06-20T18:27:29.004100254Z" level=info msg="Container 996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:29.030358 containerd[1891]: time="2025-06-20T18:27:29.030328070Z" level=info msg="CreateContainer within sandbox \"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c\"" Jun 20 18:27:29.031194 containerd[1891]: time="2025-06-20T18:27:29.031174534Z" 
level=info msg="StartContainer for \"996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c\"" Jun 20 18:27:29.032433 containerd[1891]: time="2025-06-20T18:27:29.032401734Z" level=info msg="connecting to shim 996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c" address="unix:///run/containerd/s/faa7c1ce2e22a109f9c191ac4e3e24959fe1f5a28ba8e564124594921e09c428" protocol=ttrpc version=3 Jun 20 18:27:29.053264 systemd[1]: Started cri-containerd-996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c.scope - libcontainer container 996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c. Jun 20 18:27:29.099111 containerd[1891]: time="2025-06-20T18:27:29.099044229Z" level=info msg="StartContainer for \"996f65db1afc323382408c55de463493968bc6bb5bbea7398be88f6812c7840c\" returns successfully" Jun 20 18:27:29.390495 containerd[1891]: time="2025-06-20T18:27:29.390133258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:29.394082 containerd[1891]: time="2025-06-20T18:27:29.394054949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 18:27:29.395394 containerd[1891]: time="2025-06-20T18:27:29.395370206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 429.603984ms" Jun 20 18:27:29.395394 containerd[1891]: time="2025-06-20T18:27:29.395395343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:27:29.396295 containerd[1891]: 
time="2025-06-20T18:27:29.396231535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 18:27:29.397959 containerd[1891]: time="2025-06-20T18:27:29.397914663Z" level=info msg="CreateContainer within sandbox \"f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:27:29.431264 containerd[1891]: time="2025-06-20T18:27:29.430006031Z" level=info msg="Container 48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:29.455518 containerd[1891]: time="2025-06-20T18:27:29.455479128Z" level=info msg="CreateContainer within sandbox \"f8ae5ca34379c9bb9d382e5fca3b0e10d25c8b316abda97d9b7b515e5d4bfbb3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06\"" Jun 20 18:27:29.456179 containerd[1891]: time="2025-06-20T18:27:29.456159334Z" level=info msg="StartContainer for \"48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06\"" Jun 20 18:27:29.462409 containerd[1891]: time="2025-06-20T18:27:29.462377349Z" level=info msg="connecting to shim 48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06" address="unix:///run/containerd/s/1a8d720bb6d0a009f24d745846234c7bdaa5c6ed3e34c931c25debe7cd9960c1" protocol=ttrpc version=3 Jun 20 18:27:29.482190 systemd[1]: Started cri-containerd-48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06.scope - libcontainer container 48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06. 
Jun 20 18:27:29.526667 kubelet[3386]: I0620 18:27:29.526568 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:29.536498 containerd[1891]: time="2025-06-20T18:27:29.536464083Z" level=info msg="StartContainer for \"48118efea40be031e3adf0cb22cd487b6b75e9b7cde10b28cdac60cdf3077b06\" returns successfully" Jun 20 18:27:30.548508 kubelet[3386]: I0620 18:27:30.548419 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68bf6fb6d-c75mx" podStartSLOduration=31.878484974 podStartE2EDuration="39.548404496s" podCreationTimestamp="2025-06-20 18:26:51 +0000 UTC" firstStartedPulling="2025-06-20 18:27:21.726167858 +0000 UTC m=+46.465785000" lastFinishedPulling="2025-06-20 18:27:29.39608738 +0000 UTC m=+54.135704522" observedRunningTime="2025-06-20 18:27:30.548224557 +0000 UTC m=+55.287841707" watchObservedRunningTime="2025-06-20 18:27:30.548404496 +0000 UTC m=+55.288021638" Jun 20 18:27:30.797960 containerd[1891]: time="2025-06-20T18:27:30.796883675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:30.801279 containerd[1891]: time="2025-06-20T18:27:30.801092500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 20 18:27:30.807261 containerd[1891]: time="2025-06-20T18:27:30.807239618Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:30.814117 containerd[1891]: time="2025-06-20T18:27:30.813126259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:30.814615 
containerd[1891]: time="2025-06-20T18:27:30.814591495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 1.418335504s"
Jun 20 18:27:30.814700 containerd[1891]: time="2025-06-20T18:27:30.814687073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\""
Jun 20 18:27:30.819108 containerd[1891]: time="2025-06-20T18:27:30.818104930Z" level=info msg="CreateContainer within sandbox \"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jun 20 18:27:30.845592 containerd[1891]: time="2025-06-20T18:27:30.845569002Z" level=info msg="Container a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:27:30.867601 containerd[1891]: time="2025-06-20T18:27:30.867553568Z" level=info msg="CreateContainer within sandbox \"90273bcf1488160b3d32803074d2bc836d4880bdb37b32f878e3023c6448a222\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d\""
Jun 20 18:27:30.870184 containerd[1891]: time="2025-06-20T18:27:30.868360383Z" level=info msg="StartContainer for \"a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d\""
Jun 20 18:27:30.870573 containerd[1891]: time="2025-06-20T18:27:30.870552769Z" level=info msg="connecting to shim a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d" address="unix:///run/containerd/s/faa7c1ce2e22a109f9c191ac4e3e24959fe1f5a28ba8e564124594921e09c428" protocol=ttrpc version=3
Jun 20 18:27:30.896388 systemd[1]: Started cri-containerd-a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d.scope - libcontainer container a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d.
Jun 20 18:27:30.956907 containerd[1891]: time="2025-06-20T18:27:30.956866890Z" level=info msg="StartContainer for \"a4a6f745c237c478305d8027c607430c3b908ac40aedfbbd3b2eab2a8115ca5d\" returns successfully"
Jun 20 18:27:31.445008 kubelet[3386]: I0620 18:27:31.444788 3386 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 18:27:31.445008 kubelet[3386]: I0620 18:27:31.444825 3386 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 18:27:32.633486 kubelet[3386]: I0620 18:27:32.633444 3386 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 18:27:32.672559 containerd[1891]: time="2025-06-20T18:27:32.672347139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"eff0e0212ecc6619a7b0273e45549b2beb739bdd7dff4fec064a6c053ee19496\" pid:5722 exited_at:{seconds:1750444052 nanos:670862614}"
Jun 20 18:27:32.694636 kubelet[3386]: I0620 18:27:32.694590 3386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zh6nt" podStartSLOduration=27.221503697 podStartE2EDuration="38.694574365s" podCreationTimestamp="2025-06-20 18:26:54 +0000 UTC" firstStartedPulling="2025-06-20 18:27:19.34221276 +0000 UTC m=+44.081829902" lastFinishedPulling="2025-06-20 18:27:30.815283428 +0000 UTC m=+55.554900570" observedRunningTime="2025-06-20 18:27:31.553390047 +0000 UTC m=+56.293007205" watchObservedRunningTime="2025-06-20 18:27:32.694574365 +0000 UTC m=+57.434191515"
Jun 20 18:27:32.717661 containerd[1891]: time="2025-06-20T18:27:32.717616280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"fb6a22a69fdaa9eb4f278dba4a897a499bd91265817bb4773a4c9989762fd119\" pid:5749 exited_at:{seconds:1750444052 nanos:716877170}"
Jun 20 18:27:36.033992 containerd[1891]: time="2025-06-20T18:27:36.033944466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"e4ee39b04ded50f8fe15603b7c179fb4fe038c25256d37ec17a966e2a5983d4c\" pid:5773 exited_at:{seconds:1750444056 nanos:33532379}"
Jun 20 18:27:46.300250 containerd[1891]: time="2025-06-20T18:27:46.300191060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"784527a28561c08847152b683abaa2a41cc1442ba687091e8d286d2a65e804e2\" pid:5804 exited_at:{seconds:1750444066 nanos:299285739}"
Jun 20 18:27:49.377525 containerd[1891]: time="2025-06-20T18:27:49.377425613Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"fb1d2c70ba446da3c873ed485d5832b4e1de05c5cdfa62653f3bf0cb7fcbb905\" pid:5831 exited_at:{seconds:1750444069 nanos:377192296}"
Jun 20 18:28:00.491921 containerd[1891]: time="2025-06-20T18:28:00.491872170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"a3ca714ca288499b972cf364010ebab2d206d739305340ab854b192435139a4b\" pid:5861 exited_at:{seconds:1750444080 nanos:491608173}"
Jun 20 18:28:02.661587 containerd[1891]: time="2025-06-20T18:28:02.661522473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"a84116db24f1e100924817e044679cec2dd7c0ee96edf95d1b13e39dce96761e\" pid:5884 exited_at:{seconds:1750444082 nanos:661314269}"
Jun 20 18:28:05.996018 containerd[1891]: time="2025-06-20T18:28:05.995435079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"757ef6cc40c67afa808095aff375d327434118bf00d6b376d6837c2d2af015b9\" pid:5905 exited_at:{seconds:1750444085 nanos:994777611}"
Jun 20 18:28:16.289059 containerd[1891]: time="2025-06-20T18:28:16.289018067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"d84951511ac32dfe15ae6e219ec0b1cb5d3f7f46e9a4b9ad5753825c570aeb3a\" pid:5929 exited_at:{seconds:1750444096 nanos:288747478}"
Jun 20 18:28:32.660294 containerd[1891]: time="2025-06-20T18:28:32.660208472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"05311758b7a19c079d612371669750d694249556d163dae2a1ddc8f7a225152e\" pid:5954 exited_at:{seconds:1750444112 nanos:659770304}"
Jun 20 18:28:35.989609 containerd[1891]: time="2025-06-20T18:28:35.989547325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"4bf23c9f22f6fd0584f7168724412b37d8bb10d838155221c1f3a2ed3b592a68\" pid:5983 exited_at:{seconds:1750444115 nanos:989210823}"
Jun 20 18:28:46.283501 containerd[1891]: time="2025-06-20T18:28:46.283460691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"5bfe5332fa59cb26f2fb6d4f8517cd0c28442daad0d7cbf59d16f9bf3f534e7b\" pid:6016 exited_at:{seconds:1750444126 nanos:283016539}"
Jun 20 18:28:49.371271 containerd[1891]: time="2025-06-20T18:28:49.371218155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"24b426f2703cb633b0a86ab9101b7a66005e7b6d0aa108fad5af0f804c01bc9a\" pid:6054 exited_at:{seconds:1750444129 nanos:370991838}"
Jun 20 18:28:53.610997 systemd[1]: Started sshd@7-10.200.20.16:22-10.200.16.10:46910.service - OpenSSH per-connection server daemon (10.200.16.10:46910).
Jun 20 18:28:54.100280 sshd[6068]: Accepted publickey for core from 10.200.16.10 port 46910 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:28:54.101632 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:28:54.105824 systemd-logind[1872]: New session 10 of user core.
Jun 20 18:28:54.109182 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 18:28:54.541330 sshd[6070]: Connection closed by 10.200.16.10 port 46910
Jun 20 18:28:54.541173 sshd-session[6068]: pam_unix(sshd:session): session closed for user core
Jun 20 18:28:54.545641 systemd[1]: sshd@7-10.200.20.16:22-10.200.16.10:46910.service: Deactivated successfully.
Jun 20 18:28:54.547752 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 18:28:54.550870 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit.
Jun 20 18:28:54.551739 systemd-logind[1872]: Removed session 10.
Jun 20 18:28:59.623534 systemd[1]: Started sshd@8-10.200.20.16:22-10.200.16.10:54124.service - OpenSSH per-connection server daemon (10.200.16.10:54124).
Jun 20 18:29:00.078157 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 54124 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:00.079289 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:00.083118 systemd-logind[1872]: New session 11 of user core.
Jun 20 18:29:00.088205 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 18:29:00.451450 sshd[6086]: Connection closed by 10.200.16.10 port 54124
Jun 20 18:29:00.452097 sshd-session[6084]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:00.455252 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit.
Jun 20 18:29:00.455547 systemd[1]: sshd@8-10.200.20.16:22-10.200.16.10:54124.service: Deactivated successfully.
Jun 20 18:29:00.457755 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 18:29:00.459583 systemd-logind[1872]: Removed session 11.
Jun 20 18:29:00.487450 containerd[1891]: time="2025-06-20T18:29:00.487412546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"f4e51c47870170c294a1818472edb205bf7edcdc33ebbba2046c245c2bbf36b6\" pid:6110 exited_at:{seconds:1750444140 nanos:487236383}"
Jun 20 18:29:02.656165 containerd[1891]: time="2025-06-20T18:29:02.656122220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"ec936927a857e20fad9c5fcaf646b6651f1a80ec983811cce7b107201d17792f\" pid:6133 exited_at:{seconds:1750444142 nanos:655808358}"
Jun 20 18:29:05.539854 systemd[1]: Started sshd@9-10.200.20.16:22-10.200.16.10:54134.service - OpenSSH per-connection server daemon (10.200.16.10:54134).
Jun 20 18:29:05.990756 containerd[1891]: time="2025-06-20T18:29:05.990714492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"f08f804d3418583cc13b804e6fbef0bf9e2ddf7920f64691244030c4f056e53e\" pid:6157 exited_at:{seconds:1750444145 nanos:990359893}"
Jun 20 18:29:06.029964 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 54134 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:06.031021 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:06.035004 systemd-logind[1872]: New session 12 of user core.
Jun 20 18:29:06.040180 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 18:29:06.429333 sshd[6166]: Connection closed by 10.200.16.10 port 54134
Jun 20 18:29:06.429822 sshd-session[6143]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:06.432858 systemd[1]: sshd@9-10.200.20.16:22-10.200.16.10:54134.service: Deactivated successfully.
Jun 20 18:29:06.434689 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 18:29:06.435383 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit.
Jun 20 18:29:06.436689 systemd-logind[1872]: Removed session 12.
Jun 20 18:29:06.511124 systemd[1]: Started sshd@10-10.200.20.16:22-10.200.16.10:54138.service - OpenSSH per-connection server daemon (10.200.16.10:54138).
Jun 20 18:29:06.964293 sshd[6179]: Accepted publickey for core from 10.200.16.10 port 54138 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:06.965422 sshd-session[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:06.969255 systemd-logind[1872]: New session 13 of user core.
Jun 20 18:29:06.980375 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 18:29:07.355372 sshd[6181]: Connection closed by 10.200.16.10 port 54138
Jun 20 18:29:07.355703 sshd-session[6179]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:07.359580 systemd[1]: sshd@10-10.200.20.16:22-10.200.16.10:54138.service: Deactivated successfully.
Jun 20 18:29:07.361444 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 18:29:07.362198 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit.
Jun 20 18:29:07.363841 systemd-logind[1872]: Removed session 13.
Jun 20 18:29:07.440204 systemd[1]: Started sshd@11-10.200.20.16:22-10.200.16.10:54140.service - OpenSSH per-connection server daemon (10.200.16.10:54140).
Jun 20 18:29:07.909500 sshd[6191]: Accepted publickey for core from 10.200.16.10 port 54140 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:07.910565 sshd-session[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:07.914337 systemd-logind[1872]: New session 14 of user core.
Jun 20 18:29:07.920198 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 18:29:08.305053 sshd[6193]: Connection closed by 10.200.16.10 port 54140
Jun 20 18:29:08.305492 sshd-session[6191]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:08.308831 systemd[1]: sshd@11-10.200.20.16:22-10.200.16.10:54140.service: Deactivated successfully.
Jun 20 18:29:08.310390 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 18:29:08.311003 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit.
Jun 20 18:29:08.312247 systemd-logind[1872]: Removed session 14.
Jun 20 18:29:13.387471 systemd[1]: Started sshd@12-10.200.20.16:22-10.200.16.10:39422.service - OpenSSH per-connection server daemon (10.200.16.10:39422).
Jun 20 18:29:13.841475 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 39422 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:13.842541 sshd-session[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:13.846276 systemd-logind[1872]: New session 15 of user core.
Jun 20 18:29:13.851291 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 18:29:14.215295 sshd[6214]: Connection closed by 10.200.16.10 port 39422
Jun 20 18:29:14.215512 sshd-session[6212]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:14.218919 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit.
Jun 20 18:29:14.219163 systemd[1]: sshd@12-10.200.20.16:22-10.200.16.10:39422.service: Deactivated successfully.
Jun 20 18:29:14.220515 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 18:29:14.221904 systemd-logind[1872]: Removed session 15.
Jun 20 18:29:16.283264 containerd[1891]: time="2025-06-20T18:29:16.283146852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"f7cba7a39fa92dabe481c9a3cf0fd27ebd2e06c58cf02b7c39250e21edccd8bd\" pid:6237 exited_at:{seconds:1750444156 nanos:282594002}"
Jun 20 18:29:19.300788 systemd[1]: Started sshd@13-10.200.20.16:22-10.200.16.10:55434.service - OpenSSH per-connection server daemon (10.200.16.10:55434).
Jun 20 18:29:19.771120 sshd[6249]: Accepted publickey for core from 10.200.16.10 port 55434 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:19.772111 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:19.776116 systemd-logind[1872]: New session 16 of user core.
Jun 20 18:29:19.780174 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 18:29:20.161319 sshd[6251]: Connection closed by 10.200.16.10 port 55434
Jun 20 18:29:20.161804 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:20.165279 systemd[1]: sshd@13-10.200.20.16:22-10.200.16.10:55434.service: Deactivated successfully.
Jun 20 18:29:20.167102 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 18:29:20.167857 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit.
Jun 20 18:29:20.169056 systemd-logind[1872]: Removed session 16.
Jun 20 18:29:25.252057 systemd[1]: Started sshd@14-10.200.20.16:22-10.200.16.10:55448.service - OpenSSH per-connection server daemon (10.200.16.10:55448).
Jun 20 18:29:25.708434 sshd[6263]: Accepted publickey for core from 10.200.16.10 port 55448 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:25.709631 sshd-session[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:25.714353 systemd-logind[1872]: New session 17 of user core.
Jun 20 18:29:25.720407 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 18:29:26.100588 sshd[6265]: Connection closed by 10.200.16.10 port 55448
Jun 20 18:29:26.103426 sshd-session[6263]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:26.109834 systemd[1]: sshd@14-10.200.20.16:22-10.200.16.10:55448.service: Deactivated successfully.
Jun 20 18:29:26.113447 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 18:29:26.115958 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit.
Jun 20 18:29:26.117651 systemd-logind[1872]: Removed session 17.
Jun 20 18:29:31.183528 systemd[1]: Started sshd@15-10.200.20.16:22-10.200.16.10:46290.service - OpenSSH per-connection server daemon (10.200.16.10:46290).
Jun 20 18:29:31.634652 sshd[6277]: Accepted publickey for core from 10.200.16.10 port 46290 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:31.635872 sshd-session[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:31.639841 systemd-logind[1872]: New session 18 of user core.
Jun 20 18:29:31.645198 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 18:29:32.017661 sshd[6280]: Connection closed by 10.200.16.10 port 46290
Jun 20 18:29:32.018390 sshd-session[6277]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:32.020882 systemd[1]: sshd@15-10.200.20.16:22-10.200.16.10:46290.service: Deactivated successfully.
Jun 20 18:29:32.022978 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 18:29:32.024862 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit.
Jun 20 18:29:32.026559 systemd-logind[1872]: Removed session 18.
Jun 20 18:29:32.104898 systemd[1]: Started sshd@16-10.200.20.16:22-10.200.16.10:46292.service - OpenSSH per-connection server daemon (10.200.16.10:46292).
Jun 20 18:29:32.591755 sshd[6291]: Accepted publickey for core from 10.200.16.10 port 46292 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:32.592869 sshd-session[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:32.596701 systemd-logind[1872]: New session 19 of user core.
Jun 20 18:29:32.600198 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 18:29:32.661588 containerd[1891]: time="2025-06-20T18:29:32.661535735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"c75163b649baa6e0e019a26986437183ea2076bb6478dd3b487978b0d9915687\" pid:6305 exited_at:{seconds:1750444172 nanos:661277650}"
Jun 20 18:29:33.046179 sshd[6293]: Connection closed by 10.200.16.10 port 46292
Jun 20 18:29:33.046960 sshd-session[6291]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:33.050283 systemd[1]: sshd@16-10.200.20.16:22-10.200.16.10:46292.service: Deactivated successfully.
Jun 20 18:29:33.051764 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 18:29:33.053645 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit.
Jun 20 18:29:33.055455 systemd-logind[1872]: Removed session 19.
Jun 20 18:29:33.133410 systemd[1]: Started sshd@17-10.200.20.16:22-10.200.16.10:46294.service - OpenSSH per-connection server daemon (10.200.16.10:46294).
Jun 20 18:29:33.587559 sshd[6324]: Accepted publickey for core from 10.200.16.10 port 46294 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:33.588688 sshd-session[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:33.594276 systemd-logind[1872]: New session 20 of user core.
Jun 20 18:29:33.599201 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 18:29:35.425377 sshd[6326]: Connection closed by 10.200.16.10 port 46294
Jun 20 18:29:35.425294 sshd-session[6324]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:35.429742 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit.
Jun 20 18:29:35.430479 systemd[1]: sshd@17-10.200.20.16:22-10.200.16.10:46294.service: Deactivated successfully.
Jun 20 18:29:35.433426 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 18:29:35.433678 systemd[1]: session-20.scope: Consumed 308ms CPU time, 79.4M memory peak.
Jun 20 18:29:35.435435 systemd-logind[1872]: Removed session 20.
Jun 20 18:29:35.510034 systemd[1]: Started sshd@18-10.200.20.16:22-10.200.16.10:46298.service - OpenSSH per-connection server daemon (10.200.16.10:46298).
Jun 20 18:29:35.972755 sshd[6346]: Accepted publickey for core from 10.200.16.10 port 46298 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:35.974587 sshd-session[6346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:35.981505 systemd-logind[1872]: New session 21 of user core.
Jun 20 18:29:35.985829 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 18:29:36.082422 containerd[1891]: time="2025-06-20T18:29:36.082354854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"88dafca9a739e8de06322c55c290290eb80e14e0c6657061576d0c181fd59670\" pid:6360 exited_at:{seconds:1750444176 nanos:82026576}"
Jun 20 18:29:36.455153 sshd[6366]: Connection closed by 10.200.16.10 port 46298
Jun 20 18:29:36.455814 sshd-session[6346]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:36.460837 systemd[1]: sshd@18-10.200.20.16:22-10.200.16.10:46298.service: Deactivated successfully.
Jun 20 18:29:36.464789 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 18:29:36.466633 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit.
Jun 20 18:29:36.469674 systemd-logind[1872]: Removed session 21.
Jun 20 18:29:36.539427 systemd[1]: Started sshd@19-10.200.20.16:22-10.200.16.10:46312.service - OpenSSH per-connection server daemon (10.200.16.10:46312).
Jun 20 18:29:36.995187 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 46312 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:36.996777 sshd-session[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:37.000855 systemd-logind[1872]: New session 22 of user core.
Jun 20 18:29:37.007187 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 18:29:37.380360 sshd[6382]: Connection closed by 10.200.16.10 port 46312
Jun 20 18:29:37.380923 sshd-session[6380]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:37.385233 systemd[1]: sshd@19-10.200.20.16:22-10.200.16.10:46312.service: Deactivated successfully.
Jun 20 18:29:37.387850 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 18:29:37.389744 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit.
Jun 20 18:29:37.392327 systemd-logind[1872]: Removed session 22.
Jun 20 18:29:42.473590 systemd[1]: Started sshd@20-10.200.20.16:22-10.200.16.10:40870.service - OpenSSH per-connection server daemon (10.200.16.10:40870).
Jun 20 18:29:42.963085 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 40870 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:42.964160 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:42.968202 systemd-logind[1872]: New session 23 of user core.
Jun 20 18:29:42.974189 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 18:29:43.358117 sshd[6401]: Connection closed by 10.200.16.10 port 40870
Jun 20 18:29:43.358584 sshd-session[6397]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:43.361274 systemd[1]: sshd@20-10.200.20.16:22-10.200.16.10:40870.service: Deactivated successfully.
Jun 20 18:29:43.363036 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 18:29:43.363884 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit.
Jun 20 18:29:43.365431 systemd-logind[1872]: Removed session 23.
Jun 20 18:29:46.287904 containerd[1891]: time="2025-06-20T18:29:46.287859767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4310cb5f4c2bda439a4b632f7f212cf9c89ba1250eeff3e6916cf637891754b4\" id:\"cfbd8c62a3579bc70327fc8148830d22ea7275d32cf3c0ec9157b6ba0d737e79\" pid:6423 exited_at:{seconds:1750444186 nanos:287639227}"
Jun 20 18:29:48.439446 systemd[1]: Started sshd@21-10.200.20.16:22-10.200.16.10:40884.service - OpenSSH per-connection server daemon (10.200.16.10:40884).
Jun 20 18:29:48.892312 sshd[6436]: Accepted publickey for core from 10.200.16.10 port 40884 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:48.893420 sshd-session[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:48.897181 systemd-logind[1872]: New session 24 of user core.
Jun 20 18:29:48.905388 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 18:29:49.265521 sshd[6438]: Connection closed by 10.200.16.10 port 40884
Jun 20 18:29:49.264999 sshd-session[6436]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:49.268417 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit.
Jun 20 18:29:49.268952 systemd[1]: sshd@21-10.200.20.16:22-10.200.16.10:40884.service: Deactivated successfully.
Jun 20 18:29:49.270751 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 18:29:49.272986 systemd-logind[1872]: Removed session 24.
Jun 20 18:29:49.371624 containerd[1891]: time="2025-06-20T18:29:49.371485921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"f426a0ab43336d9472e6691d8b7d43b747a148981b75ce94626c3294fd975bd5\" pid:6460 exited_at:{seconds:1750444189 nanos:371285885}"
Jun 20 18:29:54.351978 systemd[1]: Started sshd@22-10.200.20.16:22-10.200.16.10:53428.service - OpenSSH per-connection server daemon (10.200.16.10:53428).
Jun 20 18:29:54.809887 sshd[6476]: Accepted publickey for core from 10.200.16.10 port 53428 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:29:54.810959 sshd-session[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:29:54.814457 systemd-logind[1872]: New session 25 of user core.
Jun 20 18:29:54.821190 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 18:29:55.183538 sshd[6478]: Connection closed by 10.200.16.10 port 53428
Jun 20 18:29:55.183326 sshd-session[6476]: pam_unix(sshd:session): session closed for user core
Jun 20 18:29:55.186839 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit.
Jun 20 18:29:55.186911 systemd[1]: sshd@22-10.200.20.16:22-10.200.16.10:53428.service: Deactivated successfully.
Jun 20 18:29:55.189589 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 18:29:55.191217 systemd-logind[1872]: Removed session 25.
Jun 20 18:30:00.268815 systemd[1]: Started sshd@23-10.200.20.16:22-10.200.16.10:51302.service - OpenSSH per-connection server daemon (10.200.16.10:51302).
Jun 20 18:30:00.489712 containerd[1891]: time="2025-06-20T18:30:00.489667763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"0d5ab5394945053f948491ec57d5aad69c87cfb623b213be31553b34cd067038\" pid:6504 exited_at:{seconds:1750444200 nanos:489486376}"
Jun 20 18:30:00.720844 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 51302 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:30:00.721899 sshd-session[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:30:00.726034 systemd-logind[1872]: New session 26 of user core.
Jun 20 18:30:00.733193 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 18:30:01.100609 sshd[6514]: Connection closed by 10.200.16.10 port 51302
Jun 20 18:30:01.101338 sshd-session[6490]: pam_unix(sshd:session): session closed for user core
Jun 20 18:30:01.103925 systemd[1]: sshd@23-10.200.20.16:22-10.200.16.10:51302.service: Deactivated successfully.
Jun 20 18:30:01.105564 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 18:30:01.108476 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit.
Jun 20 18:30:01.109660 systemd-logind[1872]: Removed session 26.
Jun 20 18:30:02.658624 containerd[1891]: time="2025-06-20T18:30:02.658584039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f211786288d26df1610292ea78cadaf2ddfdd069e7a7607261b41eeef0d713ab\" id:\"279280204861a2071c020dacca06f0137eb793494b87b232b487f17ff56b5c05\" pid:6540 exited_at:{seconds:1750444202 nanos:658157095}"
Jun 20 18:30:05.991628 containerd[1891]: time="2025-06-20T18:30:05.991586221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae378484235c742af6c7743e9284025498c19475a7d304f210739b77b4aa0bd5\" id:\"912e5c9967310ff3b3f6e04ca75e28427d899aef9c1e29280240108ce3f8b64c\" pid:6561 exited_at:{seconds:1750444205 nanos:991211158}"
Jun 20 18:30:06.192719 systemd[1]: Started sshd@24-10.200.20.16:22-10.200.16.10:51318.service - OpenSSH per-connection server daemon (10.200.16.10:51318).
Jun 20 18:30:06.651231 sshd[6571]: Accepted publickey for core from 10.200.16.10 port 51318 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:30:06.652325 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:30:06.656850 systemd-logind[1872]: New session 27 of user core.
Jun 20 18:30:06.663196 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 18:30:07.027233 sshd[6573]: Connection closed by 10.200.16.10 port 51318
Jun 20 18:30:07.027994 sshd-session[6571]: pam_unix(sshd:session): session closed for user core
Jun 20 18:30:07.031042 systemd[1]: sshd@24-10.200.20.16:22-10.200.16.10:51318.service: Deactivated successfully.
Jun 20 18:30:07.032936 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 18:30:07.033872 systemd-logind[1872]: Session 27 logged out. Waiting for processes to exit.
Jun 20 18:30:07.036006 systemd-logind[1872]: Removed session 27.