Jun 20 18:25:17.084977 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jun 20 18:25:17.084995 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jun 20 16:58:52 -00 2025 Jun 20 18:25:17.085002 kernel: KASLR enabled Jun 20 18:25:17.085005 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 20 18:25:17.085010 kernel: printk: legacy bootconsole [pl11] enabled Jun 20 18:25:17.085014 kernel: efi: EFI v2.7 by EDK II Jun 20 18:25:17.085019 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e018 RNG=0x3fd5f998 MEMRESERVE=0x3e471598 Jun 20 18:25:17.085023 kernel: random: crng init done Jun 20 18:25:17.085027 kernel: secureboot: Secure boot disabled Jun 20 18:25:17.085031 kernel: ACPI: Early table checksum verification disabled Jun 20 18:25:17.085035 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jun 20 18:25:17.085038 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085042 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085047 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jun 20 18:25:17.085052 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085056 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085060 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085065 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085069 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085073 kernel: ACPI: SRAT 0x000000003FD5F198 
000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085078 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 20 18:25:17.085082 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 20 18:25:17.085086 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 20 18:25:17.085090 kernel: ACPI: Use ACPI SPCR as default console: Yes Jun 20 18:25:17.085094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jun 20 18:25:17.085098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jun 20 18:25:17.085102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jun 20 18:25:17.085106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jun 20 18:25:17.085110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jun 20 18:25:17.085115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jun 20 18:25:17.085119 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jun 20 18:25:17.085123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jun 20 18:25:17.085127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jun 20 18:25:17.085132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jun 20 18:25:17.085136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jun 20 18:25:17.085140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jun 20 18:25:17.085144 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jun 20 18:25:17.085148 kernel: NODE_DATA(0) allocated [mem 0x1bf7fddc0-0x1bf804fff] Jun 20 18:25:17.085152 kernel: Zone ranges: Jun 20 18:25:17.085156 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 20 18:25:17.085163 kernel: DMA32 empty Jun 20 
18:25:17.085167 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:25:17.085172 kernel: Device empty Jun 20 18:25:17.085176 kernel: Movable zone start for each node Jun 20 18:25:17.085180 kernel: Early memory node ranges Jun 20 18:25:17.085185 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 20 18:25:17.085190 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jun 20 18:25:17.085194 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jun 20 18:25:17.085198 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jun 20 18:25:17.085203 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jun 20 18:25:17.085207 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jun 20 18:25:17.085211 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jun 20 18:25:17.085215 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jun 20 18:25:17.085220 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 20 18:25:17.085224 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 20 18:25:17.085228 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 20 18:25:17.085233 kernel: psci: probing for conduit method from ACPI. Jun 20 18:25:17.085238 kernel: psci: PSCIv1.1 detected in firmware. Jun 20 18:25:17.085242 kernel: psci: Using standard PSCI v0.2 function IDs Jun 20 18:25:17.085246 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jun 20 18:25:17.085250 kernel: psci: SMC Calling Convention v1.4 Jun 20 18:25:17.085255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 20 18:25:17.085259 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 20 18:25:17.085263 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jun 20 18:25:17.085268 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jun 20 18:25:17.085272 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 20 18:25:17.085276 kernel: Detected PIPT I-cache on CPU0 Jun 20 18:25:17.085281 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jun 20 18:25:17.085286 kernel: CPU features: detected: GIC system register CPU interface Jun 20 18:25:17.085290 kernel: CPU features: detected: Spectre-v4 Jun 20 18:25:17.085294 kernel: CPU features: detected: Spectre-BHB Jun 20 18:25:17.085299 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 20 18:25:17.085303 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 20 18:25:17.085307 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jun 20 18:25:17.085311 kernel: CPU features: detected: SSBS not fully self-synchronizing Jun 20 18:25:17.085316 kernel: alternatives: applying boot alternatives Jun 20 18:25:17.085321 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:25:17.085326 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 20 18:25:17.085330 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 20 18:25:17.085335 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:25:17.085339 kernel: Fallback order for Node 0: 0 Jun 20 18:25:17.085344 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jun 20 18:25:17.085348 kernel: Policy zone: Normal Jun 20 18:25:17.085352 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:25:17.085357 kernel: software IO TLB: area num 2. Jun 20 18:25:17.085361 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jun 20 18:25:17.085365 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:25:17.085369 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:25:17.085374 kernel: rcu: RCU event tracing is enabled. Jun 20 18:25:17.085379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:25:17.085384 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:25:17.085388 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:25:17.085393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:25:17.085397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:25:17.085402 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:25:17.085406 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jun 20 18:25:17.085410 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 20 18:25:17.085414 kernel: GICv3: 960 SPIs implemented Jun 20 18:25:17.085419 kernel: GICv3: 0 Extended SPIs implemented Jun 20 18:25:17.085423 kernel: Root IRQ handler: gic_handle_irq Jun 20 18:25:17.085427 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jun 20 18:25:17.085432 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jun 20 18:25:17.085437 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 20 18:25:17.085441 kernel: ITS: No ITS available, not enabling LPIs Jun 20 18:25:17.085446 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:25:17.085450 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jun 20 18:25:17.085455 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 18:25:17.085459 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jun 20 18:25:17.085463 kernel: Console: colour dummy device 80x25 Jun 20 18:25:17.085468 kernel: printk: legacy console [tty1] enabled Jun 20 18:25:17.085473 kernel: ACPI: Core revision 20240827 Jun 20 18:25:17.085477 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jun 20 18:25:17.085482 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:25:17.085487 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 18:25:17.085491 kernel: landlock: Up and running. Jun 20 18:25:17.085496 kernel: SELinux: Initializing. 
Jun 20 18:25:17.085500 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:25:17.085505 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 20 18:25:17.085513 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x1a0000e, misc 0x31e1 Jun 20 18:25:17.085518 kernel: Hyper-V: Host Build 10.0.26100.1255-1-0 Jun 20 18:25:17.085523 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 20 18:25:17.085528 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:25:17.085532 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:25:17.085537 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 20 18:25:17.085543 kernel: Remapping and enabling EFI services. Jun 20 18:25:17.085547 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:25:17.085565 kernel: Detected PIPT I-cache on CPU1 Jun 20 18:25:17.085570 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 20 18:25:17.085575 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jun 20 18:25:17.085580 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:25:17.085585 kernel: SMP: Total of 2 processors activated. 
Jun 20 18:25:17.085590 kernel: CPU: All CPU(s) started at EL1 Jun 20 18:25:17.085594 kernel: CPU features: detected: 32-bit EL0 Support Jun 20 18:25:17.085599 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 20 18:25:17.085604 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 20 18:25:17.085609 kernel: CPU features: detected: Common not Private translations Jun 20 18:25:17.085613 kernel: CPU features: detected: CRC32 instructions Jun 20 18:25:17.085618 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jun 20 18:25:17.085624 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 20 18:25:17.085628 kernel: CPU features: detected: LSE atomic instructions Jun 20 18:25:17.085633 kernel: CPU features: detected: Privileged Access Never Jun 20 18:25:17.085638 kernel: CPU features: detected: Speculation barrier (SB) Jun 20 18:25:17.085642 kernel: CPU features: detected: TLB range maintenance instructions Jun 20 18:25:17.085647 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jun 20 18:25:17.085652 kernel: CPU features: detected: Scalable Vector Extension Jun 20 18:25:17.085656 kernel: alternatives: applying system-wide alternatives Jun 20 18:25:17.085661 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jun 20 18:25:17.085667 kernel: SVE: maximum available vector length 16 bytes per vector Jun 20 18:25:17.085671 kernel: SVE: default vector length 16 bytes per vector Jun 20 18:25:17.085676 kernel: Memory: 3976112K/4194160K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 213432K reserved, 0K cma-reserved) Jun 20 18:25:17.085681 kernel: devtmpfs: initialized Jun 20 18:25:17.085686 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:25:17.085690 kernel: futex hash table entries: 512 (order: 3, 32768 
bytes, linear) Jun 20 18:25:17.085695 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 20 18:25:17.085700 kernel: 0 pages in range for non-PLT usage Jun 20 18:25:17.085704 kernel: 508544 pages in range for PLT usage Jun 20 18:25:17.085710 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:25:17.085715 kernel: SMBIOS 3.1.0 present. Jun 20 18:25:17.085719 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jun 20 18:25:17.085724 kernel: DMI: Memory slots populated: 2/2 Jun 20 18:25:17.085729 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:25:17.085733 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 20 18:25:17.085738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 20 18:25:17.085743 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 20 18:25:17.085748 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:25:17.085753 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jun 20 18:25:17.085758 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:25:17.085763 kernel: cpuidle: using governor menu Jun 20 18:25:17.085767 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 20 18:25:17.085772 kernel: ASID allocator initialised with 32768 entries Jun 20 18:25:17.085777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:25:17.085781 kernel: Serial: AMBA PL011 UART driver Jun 20 18:25:17.085786 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:25:17.085791 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:25:17.085796 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 20 18:25:17.085801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 20 18:25:17.085806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:25:17.085810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:25:17.085815 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 20 18:25:17.085820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 20 18:25:17.085824 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:25:17.085829 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:25:17.085834 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:25:17.085839 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:25:17.085844 kernel: ACPI: Interpreter enabled Jun 20 18:25:17.085848 kernel: ACPI: Using GIC for interrupt routing Jun 20 18:25:17.085853 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 20 18:25:17.085858 kernel: printk: legacy console [ttyAMA0] enabled Jun 20 18:25:17.085863 kernel: printk: legacy bootconsole [pl11] disabled Jun 20 18:25:17.085867 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 20 18:25:17.085872 kernel: ACPI: CPU0 has been hot-added Jun 20 18:25:17.085877 kernel: ACPI: CPU1 has been hot-added Jun 20 18:25:17.085882 kernel: iommu: Default domain type: Translated Jun 20 18:25:17.085887 kernel: iommu: DMA domain TLB invalidation policy: 
strict mode Jun 20 18:25:17.085891 kernel: efivars: Registered efivars operations Jun 20 18:25:17.085896 kernel: vgaarb: loaded Jun 20 18:25:17.085901 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 20 18:25:17.085905 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:25:17.085910 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:25:17.085915 kernel: pnp: PnP ACPI init Jun 20 18:25:17.085919 kernel: pnp: PnP ACPI: found 0 devices Jun 20 18:25:17.085925 kernel: NET: Registered PF_INET protocol family Jun 20 18:25:17.085930 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 20 18:25:17.085934 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 20 18:25:17.085939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:25:17.085944 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 18:25:17.085949 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 20 18:25:17.085953 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 20 18:25:17.085958 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:25:17.085963 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 20 18:25:17.085968 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:25:17.085973 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:25:17.085978 kernel: kvm [1]: HYP mode not available Jun 20 18:25:17.085982 kernel: Initialise system trusted keyrings Jun 20 18:25:17.085987 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 20 18:25:17.085992 kernel: Key type asymmetric registered Jun 20 18:25:17.085996 kernel: Asymmetric key parser 'x509' registered Jun 20 18:25:17.086001 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 20 18:25:17.086006 kernel: io scheduler mq-deadline 
registered Jun 20 18:25:17.086011 kernel: io scheduler kyber registered Jun 20 18:25:17.086016 kernel: io scheduler bfq registered Jun 20 18:25:17.086020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:25:17.086025 kernel: thunder_xcv, ver 1.0 Jun 20 18:25:17.086030 kernel: thunder_bgx, ver 1.0 Jun 20 18:25:17.086034 kernel: nicpf, ver 1.0 Jun 20 18:25:17.086039 kernel: nicvf, ver 1.0 Jun 20 18:25:17.086147 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 20 18:25:17.086199 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-06-20T18:25:16 UTC (1750443916) Jun 20 18:25:17.086205 kernel: efifb: probing for efifb Jun 20 18:25:17.086210 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 20 18:25:17.086215 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 20 18:25:17.086220 kernel: efifb: scrolling: redraw Jun 20 18:25:17.086225 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 20 18:25:17.086229 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:25:17.086234 kernel: fb0: EFI VGA frame buffer device Jun 20 18:25:17.086239 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Jun 20 18:25:17.086245 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:25:17.086249 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jun 20 18:25:17.086254 kernel: watchdog: NMI not fully supported Jun 20 18:25:17.086259 kernel: watchdog: Hard watchdog permanently disabled Jun 20 18:25:17.086263 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:25:17.086268 kernel: Segment Routing with IPv6 Jun 20 18:25:17.086273 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:25:17.086277 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:25:17.086282 kernel: Key type dns_resolver registered Jun 20 18:25:17.086288 kernel: registered taskstats version 1 Jun 20 18:25:17.086292 kernel: Loading compiled-in X.509 certificates Jun 20 18:25:17.086297 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 4dab98fc4de70d482d00f54d1877f6231fc25377' Jun 20 18:25:17.086302 kernel: Demotion targets for Node 0: null Jun 20 18:25:17.086306 kernel: Key type .fscrypt registered Jun 20 18:25:17.086311 kernel: Key type fscrypt-provisioning registered Jun 20 18:25:17.086316 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:25:17.086320 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:25:17.086325 kernel: ima: No architecture policies found Jun 20 18:25:17.086331 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 20 18:25:17.086335 kernel: clk: Disabling unused clocks Jun 20 18:25:17.086340 kernel: PM: genpd: Disabling unused power domains Jun 20 18:25:17.086345 kernel: Warning: unable to open an initial console. 
Jun 20 18:25:17.086350 kernel: Freeing unused kernel memory: 39424K Jun 20 18:25:17.086354 kernel: Run /init as init process Jun 20 18:25:17.086359 kernel: with arguments: Jun 20 18:25:17.086364 kernel: /init Jun 20 18:25:17.086368 kernel: with environment: Jun 20 18:25:17.086373 kernel: HOME=/ Jun 20 18:25:17.086378 kernel: TERM=linux Jun 20 18:25:17.086383 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:25:17.086388 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:25:17.086395 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:25:17.086401 systemd[1]: Detected virtualization microsoft. Jun 20 18:25:17.086406 systemd[1]: Detected architecture arm64. Jun 20 18:25:17.086411 systemd[1]: Running in initrd. Jun 20 18:25:17.086416 systemd[1]: No hostname configured, using default hostname. Jun 20 18:25:17.086422 systemd[1]: Hostname set to . Jun 20 18:25:17.086427 systemd[1]: Initializing machine ID from random generator. Jun 20 18:25:17.086432 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:25:17.086437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:25:17.086442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:25:17.086447 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:25:17.086453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:25:17.086459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jun 20 18:25:17.086464 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:25:17.086470 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:25:17.086475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:25:17.086480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:25:17.086485 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:25:17.086491 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:25:17.086496 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:25:17.086501 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:25:17.086507 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:25:17.086512 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:25:17.086517 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:25:17.086522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:25:17.086527 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:25:17.086532 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:25:17.086538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:25:17.086543 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:25:17.086548 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:25:17.086563 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:25:17.086568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:25:17.086573 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jun 20 18:25:17.086579 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 18:25:17.086584 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:25:17.086590 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:25:17.086595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:25:17.086611 systemd-journald[224]: Collecting audit messages is disabled. Jun 20 18:25:17.086624 systemd-journald[224]: Journal started Jun 20 18:25:17.086639 systemd-journald[224]: Runtime Journal (/run/log/journal/0dd58ccd830f423490ef418d6d9e0364) is 8M, max 78.5M, 70.5M free. Jun 20 18:25:17.090590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:25:17.095331 systemd-modules-load[226]: Inserted module 'overlay' Jun 20 18:25:17.111578 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:25:17.111615 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:25:17.124206 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:25:17.136000 kernel: Bridge firewalling registered Jun 20 18:25:17.132774 systemd-modules-load[226]: Inserted module 'br_netfilter' Jun 20 18:25:17.140389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:25:17.145760 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:25:17.149285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:25:17.159037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:25:17.169004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 20 18:25:17.191009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:25:17.203755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:25:17.214718 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:25:17.229448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:25:17.241735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:25:17.249060 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 18:25:17.261933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:25:17.273570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:25:17.286096 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:25:17.313482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:25:17.319156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:25:17.337280 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=dc27555a94b81892dd9ef4952a54bd9fdf9ae918511eccef54084541db330bac Jun 20 18:25:17.366533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:25:17.383191 systemd-resolved[264]: Positive Trust Anchors: Jun 20 18:25:17.383205 systemd-resolved[264]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:25:17.383224 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:25:17.384965 systemd-resolved[264]: Defaulting to hostname 'linux'. Jun 20 18:25:17.386627 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:25:17.392052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:25:17.476566 kernel: SCSI subsystem initialized Jun 20 18:25:17.481570 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:25:17.489592 kernel: iscsi: registered transport (tcp) Jun 20 18:25:17.502721 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:25:17.502759 kernel: QLogic iSCSI HBA Driver Jun 20 18:25:17.515779 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:25:17.534950 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:25:17.541693 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:25:17.590530 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:25:17.596367 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 20 18:25:17.660573 kernel: raid6: neonx8 gen() 18556 MB/s Jun 20 18:25:17.677559 kernel: raid6: neonx4 gen() 18553 MB/s Jun 20 18:25:17.696560 kernel: raid6: neonx2 gen() 17081 MB/s Jun 20 18:25:17.716559 kernel: raid6: neonx1 gen() 15001 MB/s Jun 20 18:25:17.735559 kernel: raid6: int64x8 gen() 10552 MB/s Jun 20 18:25:17.754559 kernel: raid6: int64x4 gen() 10611 MB/s Jun 20 18:25:17.774651 kernel: raid6: int64x2 gen() 8989 MB/s Jun 20 18:25:17.796252 kernel: raid6: int64x1 gen() 7009 MB/s Jun 20 18:25:17.796319 kernel: raid6: using algorithm neonx8 gen() 18556 MB/s Jun 20 18:25:17.818125 kernel: raid6: .... xor() 14905 MB/s, rmw enabled Jun 20 18:25:17.818132 kernel: raid6: using neon recovery algorithm Jun 20 18:25:17.826212 kernel: xor: measuring software checksum speed Jun 20 18:25:17.826220 kernel: 8regs : 28621 MB/sec Jun 20 18:25:17.828844 kernel: 32regs : 28818 MB/sec Jun 20 18:25:17.832508 kernel: arm64_neon : 37676 MB/sec Jun 20 18:25:17.836645 kernel: xor: using function: arm64_neon (37676 MB/sec) Jun 20 18:25:17.874574 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:25:17.879840 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:25:17.889218 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:25:17.919737 systemd-udevd[475]: Using default interface naming scheme 'v255'. Jun 20 18:25:17.923915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:25:17.936670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:25:17.964047 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jun 20 18:25:17.987617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:25:17.994722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:25:18.041992 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jun 20 18:25:18.051022 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:25:18.118761 kernel: hv_vmbus: Vmbus version:5.3
Jun 20 18:25:18.119137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:18.120860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:18.151309 kernel: hv_vmbus: registering driver hid_hyperv
Jun 20 18:25:18.151332 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 20 18:25:18.151339 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jun 20 18:25:18.144947 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:18.192733 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 20 18:25:18.192752 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jun 20 18:25:18.192886 kernel: hv_vmbus: registering driver hv_netvsc
Jun 20 18:25:18.192893 kernel: hv_vmbus: registering driver hv_storvsc
Jun 20 18:25:18.192899 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 20 18:25:18.192909 kernel: scsi host0: storvsc_host_t
Jun 20 18:25:18.174141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:18.223377 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jun 20 18:25:18.223394 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jun 20 18:25:18.223421 kernel: scsi host1: storvsc_host_t
Jun 20 18:25:18.213724 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:18.240960 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jun 20 18:25:18.222975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:18.223057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:18.233352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:18.274588 kernel: PTP clock support registered
Jun 20 18:25:18.285021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:18.327103 kernel: hv_utils: Registering HyperV Utility Driver
Jun 20 18:25:18.327120 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jun 20 18:25:18.327258 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jun 20 18:25:18.327341 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jun 20 18:25:18.327405 kernel: hv_netvsc 002248c0-fc7c-0022-48c0-fc7c002248c0 eth0: VF slot 1 added
Jun 20 18:25:18.327469 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:25:18.327529 kernel: hv_vmbus: registering driver hv_utils
Jun 20 18:25:18.327537 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 18:25:18.327545 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jun 20 18:25:18.332892 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jun 20 18:25:18.345845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#70 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jun 20 18:25:18.345989 kernel: hv_utils: Heartbeat IC version 3.0
Jun 20 18:25:18.345998 kernel: hv_utils: Shutdown IC version 3.2
Jun 20 18:25:18.346005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#88 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jun 20 18:25:18.346067 kernel: hv_utils: TimeSync IC version 4.0
Jun 20 18:25:18.685609 systemd-resolved[264]: Clock change detected. Flushing caches.
Jun 20 18:25:18.699061 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jun 20 18:25:18.710260 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:25:18.710296 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:25:18.724303 kernel: hv_vmbus: registering driver hv_pci
Jun 20 18:25:18.724342 kernel: hv_pci 3f402efe-39e3-477d-8fc5-c41ae04f5256: PCI VMBus probing: Using version 0x10004
Jun 20 18:25:18.740031 kernel: hv_pci 3f402efe-39e3-477d-8fc5-c41ae04f5256: PCI host bridge to bus 39e3:00
Jun 20 18:25:18.740143 kernel: pci_bus 39e3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jun 20 18:25:18.745772 kernel: pci_bus 39e3:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 20 18:25:18.753420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#121 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 18:25:18.753549 kernel: pci 39e3:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jun 20 18:25:18.767334 kernel: pci 39e3:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jun 20 18:25:18.772319 kernel: pci 39e3:00:02.0: enabling Extended Tags
Jun 20 18:25:18.783425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#85 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 18:25:18.797377 kernel: pci 39e3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 39e3:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jun 20 18:25:18.809847 kernel: pci_bus 39e3:00: busn_res: [bus 00-ff] end is updated to 00
Jun 20 18:25:18.809975 kernel: pci 39e3:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jun 20 18:25:18.870662 kernel: mlx5_core 39e3:00:02.0: enabling device (0000 -> 0002)
Jun 20 18:25:18.878963 kernel: mlx5_core 39e3:00:02.0: PTM is not supported by PCIe
Jun 20 18:25:18.879052 kernel: mlx5_core 39e3:00:02.0: firmware version: 16.30.5006
Jun 20 18:25:19.054408 kernel: hv_netvsc 002248c0-fc7c-0022-48c0-fc7c002248c0 eth0: VF registering: eth1
Jun 20 18:25:19.054649 kernel: mlx5_core 39e3:00:02.0 eth1: joined to eth0
Jun 20 18:25:19.060419 kernel: mlx5_core 39e3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jun 20 18:25:19.072307 kernel: mlx5_core 39e3:00:02.0 enP14819s1: renamed from eth1
Jun 20 18:25:19.862829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 20 18:25:19.901575 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jun 20 18:25:19.964100 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jun 20 18:25:20.037358 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jun 20 18:25:20.043049 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jun 20 18:25:20.055370 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:25:20.071408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:25:20.083122 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:25:20.101615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:25:20.123442 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 18:25:20.133422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:25:20.165307 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jun 20 18:25:20.167692 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:25:20.183439 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:25:20.192301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jun 20 18:25:20.202312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:25:21.210801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jun 20 18:25:21.222319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:25:21.222911 disk-uuid[656]: The operation has completed successfully.
Jun 20 18:25:21.286366 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 18:25:21.286455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 18:25:21.313919 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 18:25:21.335203 sh[817]: Success
Jun 20 18:25:21.427512 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:25:21.427571 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:25:21.432458 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 18:25:21.441327 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jun 20 18:25:21.880662 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 18:25:21.888575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 18:25:21.903987 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 18:25:21.923298 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 18:25:21.936935 kernel: BTRFS: device fsid eac9c4a0-5098-4f12-a7ad-af09956ff0e3 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (835)
Jun 20 18:25:21.936966 kernel: BTRFS info (device dm-0): first mount of filesystem eac9c4a0-5098-4f12-a7ad-af09956ff0e3
Jun 20 18:25:21.941603 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jun 20 18:25:21.944914 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 18:25:22.613938 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 18:25:22.618197 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 18:25:22.626233 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 18:25:22.627060 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 18:25:22.648909 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 18:25:22.673305 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (858)
Jun 20 18:25:22.683917 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a
Jun 20 18:25:22.683957 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jun 20 18:25:22.687859 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 18:25:22.750955 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:25:22.763443 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:25:22.780436 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a
Jun 20 18:25:22.783418 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 18:25:22.789083 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 18:25:22.814879 systemd-networkd[1000]: lo: Link UP
Jun 20 18:25:22.814890 systemd-networkd[1000]: lo: Gained carrier
Jun 20 18:25:22.816091 systemd-networkd[1000]: Enumeration completed
Jun 20 18:25:22.818016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:25:22.818227 systemd-networkd[1000]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:22.818231 systemd-networkd[1000]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:25:22.823152 systemd[1]: Reached target network.target - Network.
Jun 20 18:25:22.898308 kernel: mlx5_core 39e3:00:02.0 enP14819s1: Link up
Jun 20 18:25:22.933937 systemd-networkd[1000]: enP14819s1: Link UP
Jun 20 18:25:22.938092 kernel: hv_netvsc 002248c0-fc7c-0022-48c0-fc7c002248c0 eth0: Data path switched to VF: enP14819s1
Jun 20 18:25:22.933991 systemd-networkd[1000]: eth0: Link UP
Jun 20 18:25:22.934079 systemd-networkd[1000]: eth0: Gained carrier
Jun 20 18:25:22.934087 systemd-networkd[1000]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:22.953652 systemd-networkd[1000]: enP14819s1: Gained carrier
Jun 20 18:25:22.973322 systemd-networkd[1000]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jun 20 18:25:24.373423 systemd-networkd[1000]: eth0: Gained IPv6LL
Jun 20 18:25:24.374078 systemd-networkd[1000]: enP14819s1: Gained IPv6LL
Jun 20 18:25:25.246172 ignition[1005]: Ignition 2.21.0
Jun 20 18:25:25.246184 ignition[1005]: Stage: fetch-offline
Jun 20 18:25:25.248400 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:25:25.246253 ignition[1005]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:25.257415 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 18:25:25.246259 ignition[1005]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:25.246391 ignition[1005]: parsed url from cmdline: ""
Jun 20 18:25:25.246394 ignition[1005]: no config URL provided
Jun 20 18:25:25.246397 ignition[1005]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:25:25.246402 ignition[1005]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:25:25.246406 ignition[1005]: failed to fetch config: resource requires networking
Jun 20 18:25:25.246527 ignition[1005]: Ignition finished successfully
Jun 20 18:25:25.289773 ignition[1014]: Ignition 2.21.0
Jun 20 18:25:25.289778 ignition[1014]: Stage: fetch
Jun 20 18:25:25.289918 ignition[1014]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:25.289924 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:25.289980 ignition[1014]: parsed url from cmdline: ""
Jun 20 18:25:25.289982 ignition[1014]: no config URL provided
Jun 20 18:25:25.289985 ignition[1014]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:25:25.289990 ignition[1014]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:25:25.290018 ignition[1014]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jun 20 18:25:25.408022 ignition[1014]: GET result: OK
Jun 20 18:25:25.408188 ignition[1014]: config has been read from IMDS userdata
Jun 20 18:25:25.410826 unknown[1014]: fetched base config from "system"
Jun 20 18:25:25.408218 ignition[1014]: parsing config with SHA512: 69c0b84beed311247549164780083dad912f5d4ac8f1800ba350c7f3cd8c185bef0b0d0e750467b5645baa950d7d1178f69a809eb7c0935d157f5b9ffefa6a02
Jun 20 18:25:25.410832 unknown[1014]: fetched base config from "system"
Jun 20 18:25:25.411049 ignition[1014]: fetch: fetch complete
Jun 20 18:25:25.410836 unknown[1014]: fetched user config from "azure"
Jun 20 18:25:25.411053 ignition[1014]: fetch: fetch passed
Jun 20 18:25:25.416594 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 18:25:25.411101 ignition[1014]: Ignition finished successfully
Jun 20 18:25:25.424848 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 18:25:25.462626 ignition[1020]: Ignition 2.21.0
Jun 20 18:25:25.462639 ignition[1020]: Stage: kargs
Jun 20 18:25:25.462807 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:25.470526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 18:25:25.462814 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:25.479439 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 18:25:25.463491 ignition[1020]: kargs: kargs passed
Jun 20 18:25:25.463548 ignition[1020]: Ignition finished successfully
Jun 20 18:25:25.507556 ignition[1027]: Ignition 2.21.0
Jun 20 18:25:25.507568 ignition[1027]: Stage: disks
Jun 20 18:25:25.511566 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 18:25:25.507737 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:25.519274 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 18:25:25.507745 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:25.527791 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 18:25:25.508448 ignition[1027]: disks: disks passed
Jun 20 18:25:25.537554 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:25:25.508493 ignition[1027]: Ignition finished successfully
Jun 20 18:25:25.547379 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:25:25.556802 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:25:25.567819 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 18:25:25.777072 systemd-fsck[1036]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jun 20 18:25:25.782475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 18:25:25.799404 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 18:25:26.155315 kernel: EXT4-fs (sda9): mounted filesystem 40d60ae8-3eda-4465-8dd7-9dbfcfd71664 r/w with ordered data mode. Quota mode: none.
Jun 20 18:25:26.155793 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 18:25:26.162407 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:25:26.199897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:25:26.205646 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 18:25:26.226223 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 18:25:26.238489 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 18:25:26.238572 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:25:26.257129 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 18:25:26.284184 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1051)
Jun 20 18:25:26.284227 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a
Jun 20 18:25:26.279540 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 18:25:26.302418 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jun 20 18:25:26.302434 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 18:25:26.305335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:25:27.371719 coreos-metadata[1053]: Jun 20 18:25:27.371 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 18:25:27.379262 coreos-metadata[1053]: Jun 20 18:25:27.379 INFO Fetch successful
Jun 20 18:25:27.383608 coreos-metadata[1053]: Jun 20 18:25:27.383 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jun 20 18:25:27.392235 coreos-metadata[1053]: Jun 20 18:25:27.392 INFO Fetch successful
Jun 20 18:25:27.432248 coreos-metadata[1053]: Jun 20 18:25:27.432 INFO wrote hostname ci-4344.1.0-a-442b0d77ef to /sysroot/etc/hostname
Jun 20 18:25:27.439755 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:25:27.948889 initrd-setup-root[1082]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 18:25:27.957977 initrd-setup-root[1089]: cut: /sysroot/etc/group: No such file or directory
Jun 20 18:25:27.963394 initrd-setup-root[1096]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 18:25:28.003345 initrd-setup-root[1103]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 18:25:30.184447 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 18:25:30.192536 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 18:25:30.219939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 18:25:30.233168 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 18:25:30.244877 kernel: BTRFS info (device sda6): last unmount of filesystem 12707c76-7149-46df-b84b-cd861666e01a
Jun 20 18:25:30.262346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 18:25:30.266932 ignition[1171]: INFO : Ignition 2.21.0
Jun 20 18:25:30.266932 ignition[1171]: INFO : Stage: mount
Jun 20 18:25:30.266932 ignition[1171]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:30.266932 ignition[1171]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:30.266932 ignition[1171]: INFO : mount: mount passed
Jun 20 18:25:30.266932 ignition[1171]: INFO : Ignition finished successfully
Jun 20 18:25:30.270773 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 18:25:30.279524 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 18:25:30.307488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:25:30.327311 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 (8:6) scanned by mount (1184)
Jun 20 18:25:30.339013 kernel: BTRFS info (device sda6): first mount of filesystem 12707c76-7149-46df-b84b-cd861666e01a
Jun 20 18:25:30.339047 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jun 20 18:25:30.342321 kernel: BTRFS info (device sda6): using free-space-tree
Jun 20 18:25:30.345561 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:25:30.369048 ignition[1202]: INFO : Ignition 2.21.0
Jun 20 18:25:30.369048 ignition[1202]: INFO : Stage: files
Jun 20 18:25:30.377087 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:30.377087 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:30.377087 ignition[1202]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 18:25:30.377087 ignition[1202]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 18:25:30.377087 ignition[1202]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 18:25:30.485799 ignition[1202]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 18:25:30.493659 ignition[1202]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 18:25:30.493659 ignition[1202]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 18:25:30.486177 unknown[1202]: wrote ssh authorized keys file for user: core
Jun 20 18:25:30.524171 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jun 20 18:25:30.531728 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jun 20 18:25:30.557410 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 18:25:30.635490 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jun 20 18:25:30.635490 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 18:25:30.650256 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 18:25:30.733666 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 18:25:30.733666 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 18:25:30.733666 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jun 20 18:25:31.880923 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 18:25:32.109182 ignition[1202]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jun 20 18:25:32.109182 ignition[1202]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jun 20 18:25:32.143840 ignition[1202]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:25:32.152901 ignition[1202]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 18:25:32.152901 ignition[1202]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jun 20 18:25:32.152901 ignition[1202]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 18:25:32.185314 ignition[1202]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 18:25:32.185314 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:25:32.185314 ignition[1202]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 18:25:32.185314 ignition[1202]: INFO : files: files passed
Jun 20 18:25:32.185314 ignition[1202]: INFO : Ignition finished successfully
Jun 20 18:25:32.163787 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 18:25:32.177553 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 18:25:32.213038 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 18:25:32.220877 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 18:25:32.221002 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 18:25:32.286529 initrd-setup-root-after-ignition[1231]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:25:32.286529 initrd-setup-root-after-ignition[1231]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:25:32.300396 initrd-setup-root-after-ignition[1235]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 18:25:32.294561 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:25:32.305375 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 18:25:32.316597 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 18:25:32.349703 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 18:25:32.349810 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 18:25:32.359546 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 18:25:32.369948 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 18:25:32.378352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 18:25:32.379174 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 18:25:32.413879 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:25:32.422162 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 18:25:32.454416 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:25:32.459562 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:25:32.469575 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 18:25:32.479122 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 18:25:32.479224 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 18:25:32.491536 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 18:25:32.495986 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 18:25:32.504570 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 18:25:32.513046 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:25:32.521471 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 18:25:32.530147 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 18:25:32.539921 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 18:25:32.550485 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:25:32.559859 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 18:25:32.568333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 18:25:32.577313 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 18:25:32.584703 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 18:25:32.584810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:25:32.596120 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:25:32.600616 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:25:32.609264 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 18:25:32.609331 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:25:32.618996 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 18:25:32.619084 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:25:32.632149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 18:25:32.632229 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 18:25:32.637839 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 18:25:32.637909 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 18:25:32.646455 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jun 20 18:25:32.711159 ignition[1255]: INFO : Ignition 2.21.0
Jun 20 18:25:32.711159 ignition[1255]: INFO : Stage: umount
Jun 20 18:25:32.711159 ignition[1255]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 18:25:32.711159 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:25:32.711159 ignition[1255]: INFO : umount: umount passed
Jun 20 18:25:32.711159 ignition[1255]: INFO : Ignition finished successfully
Jun 20 18:25:32.646527 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:25:32.657954 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 18:25:32.686983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 18:25:32.700134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 18:25:32.700264 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:25:32.709511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 18:25:32.709621 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:25:32.720457 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 18:25:32.721183 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 18:25:32.722070 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 18:25:32.728811 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 18:25:32.728912 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 18:25:32.736028 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 18:25:32.736067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 18:25:32.746322 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 18:25:32.746357 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 18:25:32.755590 systemd[1]: Stopped target network.target - Network.
Jun 20 18:25:32.769220 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 18:25:32.769297 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:25:32.778216 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 18:25:32.785611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 18:25:32.789737 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:25:32.795516 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 18:25:32.804002 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 18:25:32.812863 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 18:25:32.812908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:25:32.821376 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 18:25:32.821405 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:25:32.828875 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 18:25:32.828927 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 18:25:32.836724 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 18:25:32.836755 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 18:25:32.845085 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 18:25:32.853056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 18:25:32.865575 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 18:25:32.865663 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 18:25:32.875930 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 18:25:32.876025 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 18:25:32.891129 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 18:25:32.891333 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 18:25:32.891428 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 18:25:32.904615 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 18:25:33.087204 kernel: hv_netvsc 002248c0-fc7c-0022-48c0-fc7c002248c0 eth0: Data path switched from VF: enP14819s1
Jun 20 18:25:32.904800 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 18:25:32.904888 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 18:25:32.914971 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 18:25:32.922147 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 18:25:32.922187 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:25:32.931079 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 18:25:32.931144 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 18:25:32.941976 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 18:25:32.954640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 18:25:32.954704 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:25:32.964381 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 18:25:32.964432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:25:32.972807 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 18:25:32.972855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:25:32.980778 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 18:25:32.980880 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:25:32.993756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:25:33.001708 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 18:25:33.001765 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:33.026219 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 18:25:33.031309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:25:33.040081 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 18:25:33.040118 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:25:33.048245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 18:25:33.048268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:25:33.057369 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 18:25:33.057408 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:25:33.071580 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 18:25:33.071622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:25:33.094901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:25:33.094953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:25:33.112450 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 18:25:33.126142 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 18:25:33.126211 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:25:33.136732 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 18:25:33.136772 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:25:33.153226 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 18:25:33.153276 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:25:33.168475 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 18:25:33.168513 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:25:33.174183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:33.174219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:33.350527 systemd-journald[224]: Received SIGTERM from PID 1 (systemd).
Jun 20 18:25:33.193446 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 18:25:33.193490 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 20 18:25:33.193511 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 18:25:33.193533 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:25:33.193798 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 18:25:33.193886 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 18:25:33.201386 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 18:25:33.201445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 18:25:33.212206 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 18:25:33.222579 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 18:25:33.243419 systemd[1]: Switching root.
Jun 20 18:25:33.407335 systemd-journald[224]: Journal stopped
Jun 20 18:25:44.119672 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 18:25:44.119690 kernel: SELinux: policy capability open_perms=1
Jun 20 18:25:44.119698 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 18:25:44.119703 kernel: SELinux: policy capability always_check_network=0
Jun 20 18:25:44.119710 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 18:25:44.119715 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 18:25:44.119721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 18:25:44.119726 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 18:25:44.119731 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 18:25:44.119736 kernel: audit: type=1403 audit(1750443935.164:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 18:25:44.119743 systemd[1]: Successfully loaded SELinux policy in 200.296ms.
Jun 20 18:25:44.119751 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.048ms.
Jun 20 18:25:44.119757 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:25:44.119764 systemd[1]: Detected virtualization microsoft.
Jun 20 18:25:44.119770 systemd[1]: Detected architecture arm64.
Jun 20 18:25:44.119778 systemd[1]: Detected first boot.
Jun 20 18:25:44.119785 systemd[1]: Hostname set to .
Jun 20 18:25:44.119791 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:25:44.119796 zram_generator::config[1298]: No configuration found.
Jun 20 18:25:44.119802 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 18:25:44.119808 systemd[1]: Populated /etc with preset unit settings.
Jun 20 18:25:44.119814 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 18:25:44.119821 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 18:25:44.119827 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 18:25:44.119833 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 18:25:44.119839 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 18:25:44.119845 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 18:25:44.119851 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 18:25:44.119857 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 18:25:44.119863 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 18:25:44.119869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 18:25:44.119875 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 18:25:44.119881 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 18:25:44.119887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:25:44.119893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:25:44.119899 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 18:25:44.119905 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 18:25:44.119911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 18:25:44.119918 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:25:44.119924 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jun 20 18:25:44.119931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:25:44.119937 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:25:44.119943 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 18:25:44.119949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 18:25:44.119955 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:25:44.119962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 18:25:44.119968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:25:44.119974 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:25:44.119980 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:25:44.119986 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:25:44.119992 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 18:25:44.119998 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 18:25:44.120005 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 18:25:44.120011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:25:44.120017 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:25:44.120023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:25:44.120029 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 18:25:44.120036 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 18:25:44.120043 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 18:25:44.120050 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 18:25:44.120056 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 18:25:44.120062 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 18:25:44.120068 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 18:25:44.120074 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 18:25:44.120080 systemd[1]: Reached target machines.target - Containers.
Jun 20 18:25:44.120086 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 18:25:44.120093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:44.120100 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:25:44.120106 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 18:25:44.120112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:25:44.120118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:25:44.120124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:25:44.120130 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 18:25:44.120136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:25:44.120142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 18:25:44.120149 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 18:25:44.120155 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 18:25:44.120161 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 18:25:44.120168 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 18:25:44.120175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:25:44.120181 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:25:44.120187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:25:44.120193 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 18:25:44.120200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 18:25:44.120206 kernel: loop: module loaded
Jun 20 18:25:44.120212 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 18:25:44.120218 kernel: fuse: init (API version 7.41)
Jun 20 18:25:44.120223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:25:44.120229 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 18:25:44.120235 systemd[1]: Stopped verity-setup.service.
Jun 20 18:25:44.120241 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 18:25:44.120247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 18:25:44.120254 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 18:25:44.120271 systemd-journald[1395]: Collecting audit messages is disabled.
Jun 20 18:25:44.120284 kernel: ACPI: bus type drm_connector registered
Jun 20 18:25:44.120305 systemd-journald[1395]: Journal started
Jun 20 18:25:44.120319 systemd-journald[1395]: Runtime Journal (/run/log/journal/4e29b0e0ac374868a5c31295f362903f) is 8M, max 78.5M, 70.5M free.
Jun 20 18:25:43.266028 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 18:25:43.271849 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 18:25:43.272248 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 18:25:43.273522 systemd[1]: systemd-journald.service: Consumed 2.642s CPU time.
Jun 20 18:25:44.139514 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:25:44.140148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 18:25:44.146000 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 18:25:44.153556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 18:25:44.158870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 18:25:44.165193 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:25:44.171461 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 18:25:44.171605 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 18:25:44.177393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:25:44.177507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:25:44.182884 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:25:44.182996 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:25:44.188355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:25:44.188472 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:25:44.194421 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 18:25:44.194542 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 18:25:44.202524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:25:44.202651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:25:44.210322 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:25:44.215768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 18:25:44.222759 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 18:25:44.228865 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 18:25:44.235572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:25:44.250636 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 18:25:44.257193 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 18:25:44.271547 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 18:25:44.276699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 18:25:44.276729 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:25:44.282963 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 18:25:44.289959 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 18:25:44.294777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:25:44.301040 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 18:25:44.307410 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 18:25:44.312829 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:25:44.314417 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 18:25:44.319587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:25:44.320500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:25:44.328087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 18:25:44.335392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:25:44.342041 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 18:25:44.347976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 18:25:44.364171 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 18:25:44.371924 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 18:25:44.384641 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 18:25:44.385328 kernel: loop0: detected capacity change from 0 to 211168
Jun 20 18:25:44.419637 systemd-journald[1395]: Time spent on flushing to /var/log/journal/4e29b0e0ac374868a5c31295f362903f is 18.132ms for 944 entries.
Jun 20 18:25:44.419637 systemd-journald[1395]: System Journal (/var/log/journal/4e29b0e0ac374868a5c31295f362903f) is 8M, max 2.6G, 2.6G free.
Jun 20 18:25:44.533553 systemd-journald[1395]: Received client request to flush runtime journal.
Jun 20 18:25:44.533606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 18:25:44.534793 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 18:25:44.565256 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 18:25:44.566665 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 18:25:44.587309 kernel: loop1: detected capacity change from 0 to 138376
Jun 20 18:25:44.592230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:25:44.698234 systemd-tmpfiles[1439]: ACLs are not supported, ignoring.
Jun 20 18:25:44.698249 systemd-tmpfiles[1439]: ACLs are not supported, ignoring.
Jun 20 18:25:44.703318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:25:44.710874 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 18:25:45.220083 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 18:25:45.227934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:25:45.245778 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Jun 20 18:25:45.246045 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Jun 20 18:25:45.249412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:25:45.617318 kernel: loop2: detected capacity change from 0 to 107312
Jun 20 18:25:45.903179 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 18:25:45.910143 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:25:45.934133 systemd-udevd[1463]: Using default interface naming scheme 'v255'.
Jun 20 18:25:46.370314 kernel: loop3: detected capacity change from 0 to 28936
Jun 20 18:25:46.638270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:25:46.650995 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:25:46.695673 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jun 20 18:25:46.750236 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 18:25:46.791409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#176 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jun 20 18:25:46.862036 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 18:25:46.856693 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 18:25:46.931312 kernel: hv_vmbus: registering driver hyperv_fb
Jun 20 18:25:46.939651 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jun 20 18:25:46.939725 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jun 20 18:25:46.939740 kernel: hv_vmbus: registering driver hv_balloon
Jun 20 18:25:46.948259 kernel: Console: switching to colour dummy device 80x25
Jun 20 18:25:46.948346 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jun 20 18:25:46.953500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:46.965955 kernel: hv_balloon: Memory hot add disabled on ARM64
Jun 20 18:25:46.966013 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:25:46.976129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:46.977346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:46.983501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:46.990774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:25:46.990936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:46.997427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:25:47.065314 kernel: loop4: detected capacity change from 0 to 211168
Jun 20 18:25:47.077300 kernel: loop5: detected capacity change from 0 to 138376
Jun 20 18:25:47.077978 systemd-networkd[1485]: lo: Link UP
Jun 20 18:25:47.077985 systemd-networkd[1485]: lo: Gained carrier
Jun 20 18:25:47.079435 systemd-networkd[1485]: Enumeration completed
Jun 20 18:25:47.079537 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:25:47.084535 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:47.084540 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:25:47.087425 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 18:25:47.094112 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 18:25:47.107382 kernel: loop6: detected capacity change from 0 to 107312
Jun 20 18:25:47.119355 kernel: loop7: detected capacity change from 0 to 28936
Jun 20 18:25:47.122199 (sd-merge)[1544]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jun 20 18:25:47.122654 (sd-merge)[1544]: Merged extensions into '/usr'.
Jun 20 18:25:47.125098 systemd[1]: Reload requested from client PID 1437 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 18:25:47.125111 systemd[1]: Reloading...
Jun 20 18:25:47.155309 kernel: mlx5_core 39e3:00:02.0 enP14819s1: Link up
Jun 20 18:25:47.181111 kernel: hv_netvsc 002248c0-fc7c-0022-48c0-fc7c002248c0 eth0: Data path switched to VF: enP14819s1
Jun 20 18:25:47.187353 systemd-networkd[1485]: enP14819s1: Link UP
Jun 20 18:25:47.187606 systemd-networkd[1485]: eth0: Link UP
Jun 20 18:25:47.187610 systemd-networkd[1485]: eth0: Gained carrier
Jun 20 18:25:47.187628 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:25:47.188329 zram_generator::config[1584]: No configuration found.
Jun 20 18:25:47.189717 systemd-networkd[1485]: enP14819s1: Gained carrier
Jun 20 18:25:47.198341 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jun 20 18:25:47.219321 kernel: MACsec IEEE 802.1AE
Jun 20 18:25:47.274572 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:25:47.392811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 20 18:25:47.398447 systemd[1]: Reloading finished in 273 ms.
Jun 20 18:25:47.422477 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 18:25:47.429039 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 18:25:47.459335 systemd[1]: Starting ensure-sysext.service...
Jun 20 18:25:47.465412 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 18:25:47.473991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:25:47.486081 systemd[1]: Reload requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)...
Jun 20 18:25:47.486094 systemd[1]: Reloading...
Jun 20 18:25:47.523416 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 18:25:47.523438 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 18:25:47.523649 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 18:25:47.523791 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 18:25:47.524227 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 18:25:47.525474 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Jun 20 18:25:47.525630 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Jun 20 18:25:47.546360 zram_generator::config[1725]: No configuration found.
Jun 20 18:25:47.559731 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:25:47.559740 systemd-tmpfiles[1691]: Skipping /boot
Jun 20 18:25:47.567763 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 18:25:47.567890 systemd-tmpfiles[1691]: Skipping /boot
Jun 20 18:25:47.623712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:25:47.701774 systemd[1]: Reloading finished in 215 ms.
Jun 20 18:25:47.721563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 18:25:47.727447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:25:47.738885 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:25:47.749045 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 18:25:47.756070 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 18:25:47.762480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:25:47.768863 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 18:25:47.783486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:25:47.792543 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:47.802192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:25:47.808497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:25:47.814506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:25:47.819210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:25:47.819361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:25:47.821650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:25:47.822375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:25:47.827949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:25:47.828074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:25:47.834004 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:25:47.834116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:25:47.840612 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 18:25:47.850220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:47.851737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 18:25:47.860177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 18:25:47.870112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 18:25:47.877409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:25:47.877516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:25:47.883034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 18:25:47.886386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 18:25:47.892125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 18:25:47.892276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 18:25:47.898455 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 18:25:47.898590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 18:25:47.910365 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 18:25:47.916828 systemd[1]: Finished ensure-sysext.service.
Jun 20 18:25:47.922234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 18:25:47.923465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 18:25:47.928284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 18:25:47.928367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 18:25:47.928390 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 18:25:47.928432 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 18:25:47.928459 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 18:25:47.935312 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 18:25:47.935461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 18:25:47.954005 systemd-resolved[1788]: Positive Trust Anchors:
Jun 20 18:25:47.954017 systemd-resolved[1788]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:25:47.954037 systemd-resolved[1788]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:25:48.025346 systemd-resolved[1788]: Using system hostname 'ci-4344.1.0-a-442b0d77ef'.
Jun 20 18:25:48.026792 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:25:48.031599 systemd[1]: Reached target network.target - Network.
Jun 20 18:25:48.035769 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:25:48.094179 augenrules[1828]: No rules
Jun 20 18:25:48.095430 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:25:48.095620 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:25:48.694010 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 18:25:48.700392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 18:25:49.077483 systemd-networkd[1485]: eth0: Gained IPv6LL
Jun 20 18:25:49.080015 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 18:25:49.085873 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 18:25:49.205511 systemd-networkd[1485]: enP14819s1: Gained IPv6LL
Jun 20 18:25:57.026309 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 18:25:57.036617 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 18:25:57.043844 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 18:25:57.056731 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 18:25:57.062152 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:25:57.071759 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 18:25:57.077676 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 18:25:57.084584 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 18:25:57.089892 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 18:25:57.096367 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 18:25:57.102347 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 18:25:57.102459 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:25:57.106578 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:25:57.114353 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 18:25:57.120433 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 18:25:57.126069 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 18:25:57.131565 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 18:25:57.136937 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 18:25:57.143065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 18:25:57.149529 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 18:25:57.155210 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 18:25:57.160811 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:25:57.164688 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:25:57.168889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:25:57.168910 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 18:25:57.205231 systemd[1]: Starting chronyd.service - NTP client/server...
Jun 20 18:25:57.214928 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 18:25:57.234394 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 18:25:57.241820 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 18:25:57.246485 (chronyd)[1842]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jun 20 18:25:57.247850 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 18:25:57.261393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 18:25:57.268013 jq[1850]: false
Jun 20 18:25:57.268123 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 18:25:57.272556 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 18:25:57.274097 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jun 20 18:25:57.279392 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jun 20 18:25:57.280306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:25:57.287952 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 18:25:57.294464 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 18:25:57.299592 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 18:25:57.305411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 18:25:57.313366 KVP[1852]: KVP starting; pid is:1852
Jun 20 18:25:57.313623 chronyd[1864]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jun 20 18:25:57.315248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 18:25:57.320059 KVP[1852]: KVP LIC Version: 3.1
Jun 20 18:25:57.323312 kernel: hv_utils: KVP IC version 4.0
Jun 20 18:25:57.326430 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 18:25:57.333362 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 18:25:57.335563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 18:25:57.336499 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 18:25:57.345743 extend-filesystems[1851]: Found /dev/sda6
Jun 20 18:25:57.350456 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 18:25:57.356577 chronyd[1864]: Timezone right/UTC failed leap second check, ignoring
Jun 20 18:25:57.358759 systemd[1]: Started chronyd.service - NTP client/server.
Jun 20 18:25:57.356728 chronyd[1864]: Loaded seccomp filter (level 2)
Jun 20 18:25:57.365083 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 18:25:57.371467 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 18:25:57.374435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 18:25:57.376315 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 18:25:57.376525 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 18:25:57.381850 jq[1877]: true
Jun 20 18:25:57.384542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 18:25:57.388533 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 18:25:57.407989 (ntainerd)[1884]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 18:25:57.410409 jq[1883]: true
Jun 20 18:25:57.416620 extend-filesystems[1851]: Found /dev/sda9
Jun 20 18:25:57.423033 extend-filesystems[1851]: Checking size of /dev/sda9
Jun 20 18:25:57.455888 systemd-logind[1867]: New seat seat0.
Jun 20 18:25:57.460795 systemd-logind[1867]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jun 20 18:25:57.460974 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 18:25:57.474119 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 18:25:57.516330 tar[1881]: linux-arm64/LICENSE
Jun 20 18:25:57.516330 tar[1881]: linux-arm64/helm
Jun 20 18:25:57.516946 extend-filesystems[1851]: Old size kept for /dev/sda9
Jun 20 18:25:57.517750 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 18:25:57.540324 update_engine[1874]: I20250620 18:25:57.524034 1874 main.cc:92] Flatcar Update Engine starting
Jun 20 18:25:57.518362 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 18:25:57.608694 bash[1912]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 18:25:57.612545 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 18:25:57.618979 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jun 20 18:25:57.668074 dbus-daemon[1848]: [system] SELinux support is enabled
Jun 20 18:25:57.668484 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 18:25:57.676138 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 18:25:57.676166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 18:25:57.683783 update_engine[1874]: I20250620 18:25:57.683645 1874 update_check_scheduler.cc:74] Next update check in 2m25s
Jun 20 18:25:57.684314 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 18:25:57.684332 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 18:25:57.693706 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 18:25:57.698598 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jun 20 18:25:57.721512 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 18:25:57.756578 sshd_keygen[1869]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 18:25:57.758931 coreos-metadata[1844]: Jun 20 18:25:57.758 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 18:25:57.767265 coreos-metadata[1844]: Jun 20 18:25:57.764 INFO Fetch successful
Jun 20 18:25:57.767265 coreos-metadata[1844]: Jun 20 18:25:57.765 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jun 20 18:25:57.771669 coreos-metadata[1844]: Jun 20 18:25:57.770 INFO Fetch successful
Jun 20 18:25:57.771669 coreos-metadata[1844]: Jun 20 18:25:57.770 INFO Fetching http://168.63.129.16/machine/cba23b6b-8485-4e4e-9b83-8bdd11dfba64/fe269087%2Dcd42%2D49e1%2Db3de%2Dcae2b17594e6.%5Fci%2D4344.1.0%2Da%2D442b0d77ef?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jun 20 18:25:57.773007 coreos-metadata[1844]: Jun 20 18:25:57.772 INFO Fetch successful
Jun 20 18:25:57.774415 coreos-metadata[1844]: Jun 20 18:25:57.773 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jun 20 18:25:57.782228 coreos-metadata[1844]: Jun 20 18:25:57.782 INFO Fetch successful
Jun 20 18:25:57.784067 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 18:25:57.797383 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 18:25:57.811080 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jun 20 18:25:57.826883 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 18:25:57.827062 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 18:25:57.839525 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 18:25:57.854915 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 18:25:57.865157 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 18:25:57.875492 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jun 20 18:25:57.888074 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 18:25:57.893663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 18:25:57.896439 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jun 20 18:25:57.903283 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 18:25:58.090190 tar[1881]: linux-arm64/README.md
Jun 20 18:25:58.099598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:25:58.105775 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:25:58.107356 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 18:25:58.174685 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 18:25:58.362230 kubelet[2028]: E0620 18:25:58.362111 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:25:58.364120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:25:58.364232 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:25:58.364529 systemd[1]: kubelet.service: Consumed 554ms CPU time, 257.2M memory peak.
Jun 20 18:25:58.558313 containerd[1884]: time="2025-06-20T18:25:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 18:25:58.560309 containerd[1884]: time="2025-06-20T18:25:58.559553852Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 18:25:58.566266 containerd[1884]: time="2025-06-20T18:25:58.566232692Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.648µs" Jun 20 18:25:58.566266 containerd[1884]: time="2025-06-20T18:25:58.566259828Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 18:25:58.566353 containerd[1884]: time="2025-06-20T18:25:58.566274204Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 18:25:58.566443 containerd[1884]: time="2025-06-20T18:25:58.566422908Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 18:25:58.566460 containerd[1884]: time="2025-06-20T18:25:58.566442948Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 18:25:58.566472 containerd[1884]: time="2025-06-20T18:25:58.566461548Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566521 containerd[1884]: time="2025-06-20T18:25:58.566507492Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566521 containerd[1884]: time="2025-06-20T18:25:58.566519644Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 
18:25:58.566703 containerd[1884]: time="2025-06-20T18:25:58.566686476Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566719 containerd[1884]: time="2025-06-20T18:25:58.566701676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566719 containerd[1884]: time="2025-06-20T18:25:58.566708740Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566719 containerd[1884]: time="2025-06-20T18:25:58.566713972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566787 containerd[1884]: time="2025-06-20T18:25:58.566775332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566972 containerd[1884]: time="2025-06-20T18:25:58.566956356Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566995 containerd[1884]: time="2025-06-20T18:25:58.566983516Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 18:25:58.566995 containerd[1884]: time="2025-06-20T18:25:58.566992708Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 18:25:58.567023 containerd[1884]: time="2025-06-20T18:25:58.567013852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 18:25:58.567169 
containerd[1884]: time="2025-06-20T18:25:58.567154844Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 18:25:58.567226 containerd[1884]: time="2025-06-20T18:25:58.567210244Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:25:58.583861 containerd[1884]: time="2025-06-20T18:25:58.583822876Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 18:25:58.583861 containerd[1884]: time="2025-06-20T18:25:58.583870724Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 18:25:58.583861 containerd[1884]: time="2025-06-20T18:25:58.583880012Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 18:25:58.583861 containerd[1884]: time="2025-06-20T18:25:58.583889244Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 18:25:58.583861 containerd[1884]: time="2025-06-20T18:25:58.583897420Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583908828Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583917596Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583925604Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583932892Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: 
time="2025-06-20T18:25:58.583939668Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583945500Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 18:25:58.584050 containerd[1884]: time="2025-06-20T18:25:58.583953860Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 18:25:58.584133 containerd[1884]: time="2025-06-20T18:25:58.584080804Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 18:25:58.584133 containerd[1884]: time="2025-06-20T18:25:58.584095164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 18:25:58.584133 containerd[1884]: time="2025-06-20T18:25:58.584114092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 18:25:58.584133 containerd[1884]: time="2025-06-20T18:25:58.584121892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 18:25:58.584133 containerd[1884]: time="2025-06-20T18:25:58.584128892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584135844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584144908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584152036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584159212Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584165836Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 18:25:58.584190 containerd[1884]: time="2025-06-20T18:25:58.584172412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 18:25:58.584262 containerd[1884]: time="2025-06-20T18:25:58.584231404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 18:25:58.584262 containerd[1884]: time="2025-06-20T18:25:58.584242300Z" level=info msg="Start snapshots syncer" Jun 20 18:25:58.584343 containerd[1884]: time="2025-06-20T18:25:58.584271028Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 18:25:58.584481 containerd[1884]: time="2025-06-20T18:25:58.584453556Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 18:25:58.584574 containerd[1884]: time="2025-06-20T18:25:58.584493116Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 18:25:58.584574 containerd[1884]: time="2025-06-20T18:25:58.584555636Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 18:25:58.584670 containerd[1884]: time="2025-06-20T18:25:58.584658164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 18:25:58.584687 containerd[1884]: time="2025-06-20T18:25:58.584673420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 18:25:58.584687 containerd[1884]: time="2025-06-20T18:25:58.584680876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 18:25:58.584713 containerd[1884]: time="2025-06-20T18:25:58.584688676Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 18:25:58.584713 containerd[1884]: time="2025-06-20T18:25:58.584696660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 18:25:58.584713 containerd[1884]: time="2025-06-20T18:25:58.584704876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 18:25:58.584713 containerd[1884]: time="2025-06-20T18:25:58.584712540Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 18:25:58.584772 containerd[1884]: time="2025-06-20T18:25:58.584731564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 18:25:58.584772 containerd[1884]: time="2025-06-20T18:25:58.584739084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 18:25:58.584772 containerd[1884]: time="2025-06-20T18:25:58.584745556Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584778796Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584789772Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584795764Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584801372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584805796Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584815764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584823084Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584835716Z" level=info msg="runtime interface created" Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584838996Z" level=info msg="created NRI interface" Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584844300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 18:25:58.584863 containerd[1884]: time="2025-06-20T18:25:58.584851644Z" level=info msg="Connect containerd service" Jun 20 18:25:58.585236 containerd[1884]: time="2025-06-20T18:25:58.585010748Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:25:58.586446 
containerd[1884]: time="2025-06-20T18:25:58.586414572Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:25:59.885062 containerd[1884]: time="2025-06-20T18:25:59.884963780Z" level=info msg="Start subscribing containerd event" Jun 20 18:25:59.885062 containerd[1884]: time="2025-06-20T18:25:59.885032620Z" level=info msg="Start recovering state" Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885117412Z" level=info msg="Start event monitor" Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885129204Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885136932Z" level=info msg="Start streaming server" Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885144428Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885149700Z" level=info msg="runtime interface starting up..." Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885153252Z" level=info msg="starting plugins..." Jun 20 18:25:59.885384 containerd[1884]: time="2025-06-20T18:25:59.885164212Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 18:25:59.887295 containerd[1884]: time="2025-06-20T18:25:59.885565292Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:25:59.887295 containerd[1884]: time="2025-06-20T18:25:59.885613956Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:25:59.887295 containerd[1884]: time="2025-06-20T18:25:59.885659692Z" level=info msg="containerd successfully booted in 1.327738s" Jun 20 18:25:59.885754 systemd[1]: Started containerd.service - containerd container runtime. 
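The CNI error just logged is expected at this stage of boot: containerd's CRI plugin found nothing in /etc/cni/net.d (the confDir from the config blob above), and pod networking stays down until a CNI add-on, installed later by the cluster bootstrap, writes a conflist there. Purely to show the shape containerd is looking for, a hypothetical minimal conflist (the name and subnet are invented, not from this system):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    }
  ]
}
```

Once any such file appears, the "cni network conf syncer" started below picks it up without a containerd restart.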
Jun 20 18:25:59.890745 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:25:59.896583 systemd[1]: Startup finished in 1.642s (kernel) + 18.004s (initrd) + 24.930s (userspace) = 44.578s. Jun 20 18:26:00.625975 login[2019]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jun 20 18:26:00.626286 login[2018]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:00.631446 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:26:00.632341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:26:00.637285 systemd-logind[1867]: New session 1 of user core. Jun 20 18:26:00.689016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:26:00.691371 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:26:00.702798 (systemd)[2066]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:26:00.704660 systemd-logind[1867]: New session c1 of user core. Jun 20 18:26:00.906412 systemd[2066]: Queued start job for default target default.target. Jun 20 18:26:00.916006 systemd[2066]: Created slice app.slice - User Application Slice. Jun 20 18:26:00.916030 systemd[2066]: Reached target paths.target - Paths. Jun 20 18:26:00.916057 systemd[2066]: Reached target timers.target - Timers. Jun 20 18:26:00.917077 systemd[2066]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:26:00.925714 systemd[2066]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:26:00.925755 systemd[2066]: Reached target sockets.target - Sockets. Jun 20 18:26:00.925783 systemd[2066]: Reached target basic.target - Basic System. Jun 20 18:26:00.925803 systemd[2066]: Reached target default.target - Main User Target. Jun 20 18:26:00.925820 systemd[2066]: Startup finished in 217ms. 
Jun 20 18:26:00.926092 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:26:00.928209 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:26:01.284367 waagent[2015]: 2025-06-20T18:26:01.284202Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jun 20 18:26:01.300520 waagent[2015]: 2025-06-20T18:26:01.294499Z INFO Daemon Daemon OS: flatcar 4344.1.0 Jun 20 18:26:01.300865 waagent[2015]: 2025-06-20T18:26:01.300825Z INFO Daemon Daemon Python: 3.11.12 Jun 20 18:26:01.306420 waagent[2015]: 2025-06-20T18:26:01.306375Z INFO Daemon Daemon Run daemon Jun 20 18:26:01.309885 waagent[2015]: 2025-06-20T18:26:01.309850Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.0' Jun 20 18:26:01.317206 waagent[2015]: 2025-06-20T18:26:01.317173Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:26:01.321584 waagent[2015]: 2025-06-20T18:26:01.321547Z INFO Daemon Daemon Activate resource disk Jun 20 18:26:01.325412 waagent[2015]: 2025-06-20T18:26:01.325378Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:26:01.334136 waagent[2015]: 2025-06-20T18:26:01.334096Z INFO Daemon Daemon Found device: None Jun 20 18:26:01.338471 waagent[2015]: 2025-06-20T18:26:01.338431Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:26:01.345217 waagent[2015]: 2025-06-20T18:26:01.345179Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:26:01.354777 waagent[2015]: 2025-06-20T18:26:01.354734Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:26:01.359282 waagent[2015]: 2025-06-20T18:26:01.359246Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:26:01.368445 waagent[2015]: 2025-06-20T18:26:01.368411Z INFO Daemon Daemon Unable to get cloud-init 
enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:26:01.378388 waagent[2015]: 2025-06-20T18:26:01.378357Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:26:01.386040 waagent[2015]: 2025-06-20T18:26:01.386009Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:26:01.390184 waagent[2015]: 2025-06-20T18:26:01.390154Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:26:01.563794 waagent[2015]: 2025-06-20T18:26:01.563659Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:26:01.577723 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:26:01.580128 waagent[2015]: 2025-06-20T18:26:01.580060Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:26:01.584832 waagent[2015]: 2025-06-20T18:26:01.584723Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:26:01.590351 waagent[2015]: 2025-06-20T18:26:01.590264Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jun 20 18:26:01.596197 waagent[2015]: 2025-06-20T18:26:01.596132Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:26:01.600766 waagent[2015]: 2025-06-20T18:26:01.600695Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:26:01.605211 waagent[2015]: 2025-06-20T18:26:01.605148Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:26:01.627234 login[2019]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:01.631537 systemd-logind[1867]: New session 2 of user core. Jun 20 18:26:01.641448 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 20 18:26:01.701027 waagent[2015]: 2025-06-20T18:26:01.700969Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:26:01.707324 waagent[2015]: 2025-06-20T18:26:01.706990Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:26:01.711429 waagent[2015]: 2025-06-20T18:26:01.711383Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:26:01.946895 waagent[2015]: 2025-06-20T18:26:01.946754Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:26:01.952574 waagent[2015]: 2025-06-20T18:26:01.952527Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:26:01.959348 waagent[2015]: 2025-06-20T18:26:01.959315Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:26:02.037797 waagent[2015]: 2025-06-20T18:26:02.037761Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:26:02.042121 waagent[2015]: 2025-06-20T18:26:02.042087Z INFO Daemon Jun 20 18:26:02.044214 waagent[2015]: 2025-06-20T18:26:02.044188Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 490ec470-dd49-47ba-b605-06c873530a1b eTag: 3961825504039505759 source: Fabric] Jun 20 18:26:02.063648 waagent[2015]: 2025-06-20T18:26:02.063534Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:26:02.068711 waagent[2015]: 2025-06-20T18:26:02.068683Z INFO Daemon Jun 20 18:26:02.070815 waagent[2015]: 2025-06-20T18:26:02.070790Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:26:02.079227 waagent[2015]: 2025-06-20T18:26:02.079201Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:26:02.173384 waagent[2015]: 2025-06-20T18:26:02.173330Z INFO Daemon Downloaded certificate {'thumbprint': '399330E85FE1B8E8EF4F3352A29CAA52E2BBC819', 'hasPrivateKey': True} Jun 20 18:26:02.180586 waagent[2015]: 2025-06-20T18:26:02.180550Z INFO Daemon Downloaded certificate {'thumbprint': '065A304F772B2F0EA55728150850E302F9E231B6', 'hasPrivateKey': False} Jun 20 18:26:02.187841 waagent[2015]: 2025-06-20T18:26:02.187808Z INFO Daemon Fetch goal state completed Jun 20 18:26:02.196869 waagent[2015]: 2025-06-20T18:26:02.196839Z INFO Daemon Daemon Starting provisioning Jun 20 18:26:02.201147 waagent[2015]: 2025-06-20T18:26:02.201084Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:26:02.204710 waagent[2015]: 2025-06-20T18:26:02.204685Z INFO Daemon Daemon Set hostname [ci-4344.1.0-a-442b0d77ef] Jun 20 18:26:02.246612 waagent[2015]: 2025-06-20T18:26:02.246547Z INFO Daemon Daemon Publish hostname [ci-4344.1.0-a-442b0d77ef] Jun 20 18:26:02.251895 waagent[2015]: 2025-06-20T18:26:02.251849Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:26:02.256630 waagent[2015]: 2025-06-20T18:26:02.256598Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:26:02.266248 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:26:02.266253 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 20 18:26:02.266284 systemd-networkd[1485]: eth0: DHCP lease lost Jun 20 18:26:02.271338 waagent[2015]: 2025-06-20T18:26:02.267193Z INFO Daemon Daemon Create user account if not exists Jun 20 18:26:02.271720 waagent[2015]: 2025-06-20T18:26:02.271683Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:26:02.276050 waagent[2015]: 2025-06-20T18:26:02.276019Z INFO Daemon Daemon Configure sudoer Jun 20 18:26:02.284999 waagent[2015]: 2025-06-20T18:26:02.284953Z INFO Daemon Daemon Configure sshd Jun 20 18:26:02.293158 waagent[2015]: 2025-06-20T18:26:02.293113Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:26:02.302976 waagent[2015]: 2025-06-20T18:26:02.302943Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:26:02.309358 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.17/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 20 18:26:03.429590 waagent[2015]: 2025-06-20T18:26:03.429544Z INFO Daemon Daemon Provisioning complete Jun 20 18:26:03.443844 waagent[2015]: 2025-06-20T18:26:03.443807Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:26:03.449359 waagent[2015]: 2025-06-20T18:26:03.449325Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jun 20 18:26:03.460849 waagent[2015]: 2025-06-20T18:26:03.460818Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jun 20 18:26:03.558076 waagent[2121]: 2025-06-20T18:26:03.557639Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jun 20 18:26:03.558076 waagent[2121]: 2025-06-20T18:26:03.557763Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.0 Jun 20 18:26:03.558076 waagent[2121]: 2025-06-20T18:26:03.557798Z INFO ExtHandler ExtHandler Python: 3.11.12 Jun 20 18:26:03.558076 waagent[2121]: 2025-06-20T18:26:03.557830Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jun 20 18:26:03.637708 waagent[2121]: 2025-06-20T18:26:03.637638Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jun 20 18:26:03.638022 waagent[2121]: 2025-06-20T18:26:03.637990Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:26:03.638152 waagent[2121]: 2025-06-20T18:26:03.638127Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:26:03.646156 waagent[2121]: 2025-06-20T18:26:03.646101Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:26:03.656328 waagent[2121]: 2025-06-20T18:26:03.655886Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:26:03.656328 waagent[2121]: 2025-06-20T18:26:03.656259Z INFO ExtHandler Jun 20 18:26:03.656428 waagent[2121]: 2025-06-20T18:26:03.656357Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 20ace6f0-140b-44df-810b-4b5cd13822b9 eTag: 3961825504039505759 source: Fabric] Jun 20 18:26:03.656615 waagent[2121]: 2025-06-20T18:26:03.656584Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:26:03.657017 waagent[2121]: 2025-06-20T18:26:03.656985Z INFO ExtHandler Jun 20 18:26:03.657054 waagent[2121]: 2025-06-20T18:26:03.657038Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:26:03.662062 waagent[2121]: 2025-06-20T18:26:03.662036Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:26:03.735031 waagent[2121]: 2025-06-20T18:26:03.734923Z INFO ExtHandler Downloaded certificate {'thumbprint': '399330E85FE1B8E8EF4F3352A29CAA52E2BBC819', 'hasPrivateKey': True} Jun 20 18:26:03.735271 waagent[2121]: 2025-06-20T18:26:03.735239Z INFO ExtHandler Downloaded certificate {'thumbprint': '065A304F772B2F0EA55728150850E302F9E231B6', 'hasPrivateKey': False} Jun 20 18:26:03.735600 waagent[2121]: 2025-06-20T18:26:03.735569Z INFO ExtHandler Fetch goal state completed Jun 20 18:26:03.759042 waagent[2121]: 2025-06-20T18:26:03.758993Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jun 20 18:26:03.762444 waagent[2121]: 2025-06-20T18:26:03.762398Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2121 Jun 20 18:26:03.762543 waagent[2121]: 2025-06-20T18:26:03.762519Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:26:03.762774 waagent[2121]: 2025-06-20T18:26:03.762747Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jun 20 18:26:03.763878 waagent[2121]: 2025-06-20T18:26:03.763843Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:26:03.764194 waagent[2121]: 2025-06-20T18:26:03.764164Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jun 20 18:26:03.764342 waagent[2121]: 
2025-06-20T18:26:03.764282Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jun 20 18:26:03.764777 waagent[2121]: 2025-06-20T18:26:03.764748Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:26:03.804790 waagent[2121]: 2025-06-20T18:26:03.804756Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:26:03.804969 waagent[2121]: 2025-06-20T18:26:03.804941Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:26:03.809351 waagent[2121]: 2025-06-20T18:26:03.809094Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:26:03.814085 systemd[1]: Reload requested from client PID 2138 ('systemctl') (unit waagent.service)... Jun 20 18:26:03.814319 systemd[1]: Reloading... Jun 20 18:26:03.886992 zram_generator::config[2176]: No configuration found. Jun 20 18:26:03.954274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:26:04.036196 systemd[1]: Reloading finished in 221 ms. Jun 20 18:26:04.063537 waagent[2121]: 2025-06-20T18:26:04.061492Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:26:04.063537 waagent[2121]: 2025-06-20T18:26:04.061624Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:26:04.878285 waagent[2121]: 2025-06-20T18:26:04.877364Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 20 18:26:04.878285 waagent[2121]: 2025-06-20T18:26:04.877739Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jun 20 18:26:04.878670 waagent[2121]: 2025-06-20T18:26:04.878544Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:26:04.878670 waagent[2121]: 2025-06-20T18:26:04.878628Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:26:04.878839 waagent[2121]: 2025-06-20T18:26:04.878801Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 18:26:04.878946 waagent[2121]: 2025-06-20T18:26:04.878900Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:26:04.879119 waagent[2121]: 2025-06-20T18:26:04.879085Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:26:04.879119 waagent[2121]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:26:04.879119 waagent[2121]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:26:04.879119 waagent[2121]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:26:04.879119 waagent[2121]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:26:04.879119 waagent[2121]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:26:04.879119 waagent[2121]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:26:04.879644 waagent[2121]: 2025-06-20T18:26:04.879606Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
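The eth0 rows the MonitorHandler printed are raw /proc/net/route entries, where destination, gateway, and mask are little-endian hex IPv4 words. A small decoder (an illustrative helper, not part of the agent) recovers the readable addresses, including the default gateway and the Azure wireserver route:

```python
import socket
import struct

def decode(hexaddr: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex.
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# Fields taken from the routing table dump above.
print(decode("0114C80A"))  # gateway of the default route -> 10.200.20.1
print(decode("10813FA8"))  # destination of the wireserver route -> 168.63.129.16
print(decode("00FFFFFF"))  # netmask of the link route -> 255.255.255.0
```

This matches the DHCPv4 lease logged earlier (10.200.20.17/24 via gateway 10.200.20.1 from 168.63.129.16).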
Jun 20 18:26:04.879793 waagent[2121]: 2025-06-20T18:26:04.879771Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:26:04.880098 waagent[2121]: 2025-06-20T18:26:04.880059Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:26:04.880172 waagent[2121]: 2025-06-20T18:26:04.880132Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:26:04.880368 waagent[2121]: 2025-06-20T18:26:04.880270Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:26:04.880867 waagent[2121]: 2025-06-20T18:26:04.880840Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:26:04.880923 waagent[2121]: 2025-06-20T18:26:04.880746Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:26:04.881064 waagent[2121]: 2025-06-20T18:26:04.881037Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:26:04.881169 waagent[2121]: 2025-06-20T18:26:04.881148Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:26:04.881245 waagent[2121]: 2025-06-20T18:26:04.881218Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 20 18:26:04.882374 waagent[2121]: 2025-06-20T18:26:04.882346Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:26:04.893765 waagent[2121]: 2025-06-20T18:26:04.893707Z INFO ExtHandler ExtHandler Jun 20 18:26:04.893875 waagent[2121]: 2025-06-20T18:26:04.893803Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f89d02e5-646b-4ba5-8954-e67565d2362c correlation e7a82b9a-6c1d-431c-9dce-450e31a044d7 created: 2025-06-20T18:23:49.225418Z] Jun 20 18:26:04.894166 waagent[2121]: 2025-06-20T18:26:04.894127Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jun 20 18:26:04.894641 waagent[2121]: 2025-06-20T18:26:04.894611Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jun 20 18:26:05.032025 waagent[2121]: 2025-06-20T18:26:05.031943Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jun 20 18:26:05.032025 waagent[2121]: Try `iptables -h' or 'iptables --help' for more information.) Jun 20 18:26:05.032507 waagent[2121]: 2025-06-20T18:26:05.032472Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3EE1D06E-0254-4D7C-A361-51172115DDB4;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jun 20 18:26:05.105350 waagent[2121]: 2025-06-20T18:26:05.105272Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:26:05.105350 waagent[2121]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:26:05.105350 waagent[2121]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:26:05.105350 waagent[2121]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:fc:7c brd ff:ff:ff:ff:ff:ff Jun 20 18:26:05.105350 waagent[2121]: 3: enP14819s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:c0:fc:7c brd ff:ff:ff:ff:ff:ff\ altname enP14819p0s2 Jun 20 18:26:05.105350 waagent[2121]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:26:05.105350 waagent[2121]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:26:05.105350 waagent[2121]: 2: eth0 inet 10.200.20.17/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:26:05.105350 waagent[2121]: Executing ['ip', '-6', '-a', 
'-o', 'address']: Jun 20 18:26:05.105350 waagent[2121]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:26:05.105350 waagent[2121]: 2: eth0 inet6 fe80::222:48ff:fec0:fc7c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:26:05.105350 waagent[2121]: 3: enP14819s1 inet6 fe80::222:48ff:fec0:fc7c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:26:05.676938 waagent[2121]: 2025-06-20T18:26:05.676870Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jun 20 18:26:05.676938 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.676938 waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.676938 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.676938 waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.676938 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.676938 waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.676938 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:26:05.676938 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:26:05.676938 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:26:05.679267 waagent[2121]: 2025-06-20T18:26:05.679223Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:26:05.679267 waagent[2121]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.679267 waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.679267 waagent[2121]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.679267 waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.679267 waagent[2121]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:26:05.679267 
waagent[2121]: pkts bytes target prot opt in out source destination Jun 20 18:26:05.679267 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:26:05.679267 waagent[2121]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:26:05.679267 waagent[2121]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:26:05.679475 waagent[2121]: 2025-06-20T18:26:05.679461Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:26:08.614871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:26:08.616435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:08.744213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:26:08.752710 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:26:08.862999 kubelet[2272]: E0620 18:26:08.862875 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:26:08.865909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:26:08.866030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:26:08.866412 systemd[1]: kubelet.service: Consumed 110ms CPU time, 107.7M memory peak. Jun 20 18:26:19.074594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:26:19.075986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:19.173171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:26:19.175489 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:26:19.303254 kubelet[2287]: E0620 18:26:19.303199 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:26:19.305797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:26:19.306018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:26:19.306616 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.5M memory peak. Jun 20 18:26:21.152206 chronyd[1864]: Selected source PHC0 Jun 20 18:26:29.324637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 18:26:29.326031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:26:29.446187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:26:29.448647 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:26:29.554630 kubelet[2302]: E0620 18:26:29.554571 2302 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:26:29.556868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:26:29.557059 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 20 18:26:29.558433 systemd[1]: kubelet.service: Consumed 182ms CPU time, 106.9M memory peak. Jun 20 18:26:35.080851 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 20 18:26:36.180159 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:26:36.181194 systemd[1]: Started sshd@0-10.200.20.17:22-10.200.16.10:37518.service - OpenSSH per-connection server daemon (10.200.16.10:37518). Jun 20 18:26:36.928794 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 37518 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:36.929970 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:36.933786 systemd-logind[1867]: New session 3 of user core. Jun 20 18:26:36.944417 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:26:37.371483 systemd[1]: Started sshd@1-10.200.20.17:22-10.200.16.10:37532.service - OpenSSH per-connection server daemon (10.200.16.10:37532). Jun 20 18:26:37.827914 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 37532 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:26:37.829149 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:26:37.832705 systemd-logind[1867]: New session 4 of user core. Jun 20 18:26:37.839487 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:26:38.161354 sshd[2317]: Connection closed by 10.200.16.10 port 37532 Jun 20 18:26:38.161873 sshd-session[2315]: pam_unix(sshd:session): session closed for user core Jun 20 18:26:38.165734 systemd[1]: sshd@1-10.200.20.17:22-10.200.16.10:37532.service: Deactivated successfully. Jun 20 18:26:38.167046 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:26:38.168010 systemd-logind[1867]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:26:38.169071 systemd-logind[1867]: Removed session 4. 
Jun 20 18:26:38.249932 systemd[1]: Started sshd@2-10.200.20.17:22-10.200.16.10:37546.service - OpenSSH per-connection server daemon (10.200.16.10:37546).
Jun 20 18:26:38.730813 sshd[2323]: Accepted publickey for core from 10.200.16.10 port 37546 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:38.732005 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:38.735646 systemd-logind[1867]: New session 5 of user core.
Jun 20 18:26:38.743402 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 18:26:39.083330 sshd[2325]: Connection closed by 10.200.16.10 port 37546
Jun 20 18:26:39.083821 sshd-session[2323]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:39.086809 systemd-logind[1867]: Session 5 logged out. Waiting for processes to exit.
Jun 20 18:26:39.087398 systemd[1]: sshd@2-10.200.20.17:22-10.200.16.10:37546.service: Deactivated successfully.
Jun 20 18:26:39.089091 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 18:26:39.090358 systemd-logind[1867]: Removed session 5.
Jun 20 18:26:39.169727 systemd[1]: Started sshd@3-10.200.20.17:22-10.200.16.10:60402.service - OpenSSH per-connection server daemon (10.200.16.10:60402).
Jun 20 18:26:39.574628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jun 20 18:26:39.577448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:39.661980 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 60402 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:39.663548 sshd-session[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:39.667851 systemd-logind[1867]: New session 6 of user core.
Jun 20 18:26:39.669073 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 18:26:39.676216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:39.678670 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:26:39.806083 kubelet[2342]: E0620 18:26:39.806028 2342 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:26:39.808236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:26:39.808366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:26:39.808850 systemd[1]: kubelet.service: Consumed 202ms CPU time, 106.5M memory peak.
Jun 20 18:26:40.022318 sshd[2340]: Connection closed by 10.200.16.10 port 60402
Jun 20 18:26:40.023034 sshd-session[2331]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:40.026523 systemd[1]: sshd@3-10.200.20.17:22-10.200.16.10:60402.service: Deactivated successfully.
Jun 20 18:26:40.027941 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 18:26:40.028547 systemd-logind[1867]: Session 6 logged out. Waiting for processes to exit.
Jun 20 18:26:40.029868 systemd-logind[1867]: Removed session 6.
Jun 20 18:26:40.107867 systemd[1]: Started sshd@4-10.200.20.17:22-10.200.16.10:60414.service - OpenSSH per-connection server daemon (10.200.16.10:60414).
Jun 20 18:26:40.566008 sshd[2354]: Accepted publickey for core from 10.200.16.10 port 60414 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:40.567184 sshd-session[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:40.571152 systemd-logind[1867]: New session 7 of user core.
Jun 20 18:26:40.577415 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 18:26:41.050708 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 18:26:41.050928 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:41.095999 sudo[2357]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:41.168669 sshd[2356]: Connection closed by 10.200.16.10 port 60414
Jun 20 18:26:41.167863 sshd-session[2354]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:41.171102 systemd-logind[1867]: Session 7 logged out. Waiting for processes to exit.
Jun 20 18:26:41.171459 systemd[1]: sshd@4-10.200.20.17:22-10.200.16.10:60414.service: Deactivated successfully.
Jun 20 18:26:41.172937 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 18:26:41.175205 systemd-logind[1867]: Removed session 7.
Jun 20 18:26:41.253100 systemd[1]: Started sshd@5-10.200.20.17:22-10.200.16.10:60418.service - OpenSSH per-connection server daemon (10.200.16.10:60418).
Jun 20 18:26:41.731809 sshd[2363]: Accepted publickey for core from 10.200.16.10 port 60418 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:41.733080 sshd-session[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:41.737139 systemd-logind[1867]: New session 8 of user core.
Jun 20 18:26:41.744436 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 18:26:41.998213 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 18:26:41.998904 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:42.005839 sudo[2367]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:42.009431 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 18:26:42.009628 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:42.016409 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 18:26:42.045116 augenrules[2389]: No rules
Jun 20 18:26:42.046165 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 18:26:42.046353 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 18:26:42.047407 sudo[2366]: pam_unix(sudo:session): session closed for user root
Jun 20 18:26:42.136765 sshd[2365]: Connection closed by 10.200.16.10 port 60418
Jun 20 18:26:42.137604 sshd-session[2363]: pam_unix(sshd:session): session closed for user core
Jun 20 18:26:42.141155 systemd-logind[1867]: Session 8 logged out. Waiting for processes to exit.
Jun 20 18:26:42.141782 systemd[1]: sshd@5-10.200.20.17:22-10.200.16.10:60418.service: Deactivated successfully.
Jun 20 18:26:42.144640 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 18:26:42.145909 systemd-logind[1867]: Removed session 8.
Jun 20 18:26:42.225936 systemd[1]: Started sshd@6-10.200.20.17:22-10.200.16.10:60422.service - OpenSSH per-connection server daemon (10.200.16.10:60422).
Jun 20 18:26:42.684799 sshd[2398]: Accepted publickey for core from 10.200.16.10 port 60422 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc
Jun 20 18:26:42.685938 sshd-session[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 18:26:42.689459 systemd-logind[1867]: New session 9 of user core.
Jun 20 18:26:42.696571 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 18:26:42.941327 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 18:26:42.941554 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 18:26:43.319320 update_engine[1874]: I20250620 18:26:43.319048 1874 update_attempter.cc:509] Updating boot flags...
Jun 20 18:26:45.066099 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 18:26:45.076539 (dockerd)[2482]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 18:26:46.761002 dockerd[2482]: time="2025-06-20T18:26:46.760497329Z" level=info msg="Starting up"
Jun 20 18:26:46.762260 dockerd[2482]: time="2025-06-20T18:26:46.762234922Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 18:26:46.877751 dockerd[2482]: time="2025-06-20T18:26:46.877709751Z" level=info msg="Loading containers: start."
Jun 20 18:26:46.893472 kernel: Initializing XFRM netlink socket
Jun 20 18:26:47.366823 systemd-networkd[1485]: docker0: Link UP
Jun 20 18:26:47.410261 dockerd[2482]: time="2025-06-20T18:26:47.410166116Z" level=info msg="Loading containers: done."
Jun 20 18:26:47.420326 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3614540446-merged.mount: Deactivated successfully.
Jun 20 18:26:47.439417 dockerd[2482]: time="2025-06-20T18:26:47.439312682Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 18:26:47.439417 dockerd[2482]: time="2025-06-20T18:26:47.439418286Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 18:26:47.439536 dockerd[2482]: time="2025-06-20T18:26:47.439519507Z" level=info msg="Initializing buildkit"
Jun 20 18:26:47.503319 dockerd[2482]: time="2025-06-20T18:26:47.503173890Z" level=info msg="Completed buildkit initialization"
Jun 20 18:26:47.508561 dockerd[2482]: time="2025-06-20T18:26:47.508522604Z" level=info msg="Daemon has completed initialization"
Jun 20 18:26:47.509133 dockerd[2482]: time="2025-06-20T18:26:47.508572246Z" level=info msg="API listen on /run/docker.sock"
Jun 20 18:26:47.508816 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 18:26:48.042149 containerd[1884]: time="2025-06-20T18:26:48.042111615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jun 20 18:26:49.043951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161172118.mount: Deactivated successfully.
Jun 20 18:26:49.824439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jun 20 18:26:49.826251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:26:49.927224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:26:49.929740 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:26:49.954075 kubelet[2720]: E0620 18:26:49.954020 2720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:26:49.956325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:26:49.956546 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:26:49.957076 systemd[1]: kubelet.service: Consumed 102ms CPU time, 104.4M memory peak.
Jun 20 18:26:51.137270 containerd[1884]: time="2025-06-20T18:26:51.136659160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:51.147958 containerd[1884]: time="2025-06-20T18:26:51.147930055Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351716"
Jun 20 18:26:51.152533 containerd[1884]: time="2025-06-20T18:26:51.152493252Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:51.157975 containerd[1884]: time="2025-06-20T18:26:51.157942641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:51.158381 containerd[1884]: time="2025-06-20T18:26:51.158306310Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 3.116158189s"
Jun 20 18:26:51.158381 containerd[1884]: time="2025-06-20T18:26:51.158333431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jun 20 18:26:51.159604 containerd[1884]: time="2025-06-20T18:26:51.159456239Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jun 20 18:26:52.803327 containerd[1884]: time="2025-06-20T18:26:52.802805702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:52.808609 containerd[1884]: time="2025-06-20T18:26:52.808581662Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537623"
Jun 20 18:26:52.813533 containerd[1884]: time="2025-06-20T18:26:52.813512496Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:52.823035 containerd[1884]: time="2025-06-20T18:26:52.823010671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:52.823598 containerd[1884]: time="2025-06-20T18:26:52.823452879Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.663972735s"
Jun 20 18:26:52.823598 containerd[1884]: time="2025-06-20T18:26:52.823481848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jun 20 18:26:52.823963 containerd[1884]: time="2025-06-20T18:26:52.823859342Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jun 20 18:26:54.475853 containerd[1884]: time="2025-06-20T18:26:54.475641742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:54.482084 containerd[1884]: time="2025-06-20T18:26:54.482048913Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293515"
Jun 20 18:26:54.489152 containerd[1884]: time="2025-06-20T18:26:54.489104483Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:54.496322 containerd[1884]: time="2025-06-20T18:26:54.495995616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:54.496476 containerd[1884]: time="2025-06-20T18:26:54.496455897Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.672570451s"
Jun 20 18:26:54.496538 containerd[1884]: time="2025-06-20T18:26:54.496525788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jun 20 18:26:54.497198 containerd[1884]: time="2025-06-20T18:26:54.497173835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jun 20 18:26:55.690269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083993117.mount: Deactivated successfully.
Jun 20 18:26:56.012684 containerd[1884]: time="2025-06-20T18:26:56.012554551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:56.017782 containerd[1884]: time="2025-06-20T18:26:56.017626433Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199472"
Jun 20 18:26:56.021601 containerd[1884]: time="2025-06-20T18:26:56.021575770Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:56.025963 containerd[1884]: time="2025-06-20T18:26:56.025933530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:56.026522 containerd[1884]: time="2025-06-20T18:26:56.026177987Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.528979767s"
Jun 20 18:26:56.026522 containerd[1884]: time="2025-06-20T18:26:56.026205724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jun 20 18:26:56.026676 containerd[1884]: time="2025-06-20T18:26:56.026655236Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jun 20 18:26:56.913528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2407170226.mount: Deactivated successfully.
Jun 20 18:26:58.649901 containerd[1884]: time="2025-06-20T18:26:58.649842016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:58.654077 containerd[1884]: time="2025-06-20T18:26:58.654035594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Jun 20 18:26:58.659426 containerd[1884]: time="2025-06-20T18:26:58.659378078Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:58.664107 containerd[1884]: time="2025-06-20T18:26:58.664056913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:26:58.664956 containerd[1884]: time="2025-06-20T18:26:58.664650639Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.637971058s"
Jun 20 18:26:58.664956 containerd[1884]: time="2025-06-20T18:26:58.664680408Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jun 20 18:26:58.665121 containerd[1884]: time="2025-06-20T18:26:58.665090383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jun 20 18:26:59.283173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971777595.mount: Deactivated successfully.
Jun 20 18:26:59.325338 containerd[1884]: time="2025-06-20T18:26:59.324976584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:59.328668 containerd[1884]: time="2025-06-20T18:26:59.328630550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jun 20 18:26:59.335707 containerd[1884]: time="2025-06-20T18:26:59.335659832Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:59.341524 containerd[1884]: time="2025-06-20T18:26:59.341475181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jun 20 18:26:59.342158 containerd[1884]: time="2025-06-20T18:26:59.341807297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 676.685545ms"
Jun 20 18:26:59.342158 containerd[1884]: time="2025-06-20T18:26:59.341837410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jun 20 18:26:59.342337 containerd[1884]: time="2025-06-20T18:26:59.342314716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jun 20 18:27:00.074463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jun 20 18:27:00.076094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:27:00.175182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:27:00.177672 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 18:27:00.263809 kubelet[2832]: E0620 18:27:00.263751 2832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 18:27:00.266533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 18:27:00.266752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 18:27:00.267191 systemd[1]: kubelet.service: Consumed 100ms CPU time, 105.2M memory peak.
Jun 20 18:27:00.713414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809186681.mount: Deactivated successfully.
Jun 20 18:27:03.667418 containerd[1884]: time="2025-06-20T18:27:03.667369038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:03.672124 containerd[1884]: time="2025-06-20T18:27:03.672088669Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599"
Jun 20 18:27:03.677241 containerd[1884]: time="2025-06-20T18:27:03.677192475Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:03.681873 containerd[1884]: time="2025-06-20T18:27:03.681811479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:03.682611 containerd[1884]: time="2025-06-20T18:27:03.682436846Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.340096154s"
Jun 20 18:27:03.682611 containerd[1884]: time="2025-06-20T18:27:03.682464399Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jun 20 18:27:06.839930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:27:06.840365 systemd[1]: kubelet.service: Consumed 100ms CPU time, 105.2M memory peak.
Jun 20 18:27:06.842422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:27:06.861002 systemd[1]: Reload requested from client PID 2921 ('systemctl') (unit session-9.scope)...
Jun 20 18:27:06.861135 systemd[1]: Reloading...
Jun 20 18:27:06.948469 zram_generator::config[2967]: No configuration found.
Jun 20 18:27:07.028303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 18:27:07.111382 systemd[1]: Reloading finished in 249 ms.
Jun 20 18:27:07.144651 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 20 18:27:07.144714 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 20 18:27:07.144914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:27:07.144953 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95M memory peak.
Jun 20 18:27:07.146104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 18:27:07.356352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 18:27:07.360716 (kubelet)[3034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 20 18:27:07.386435 kubelet[3034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:27:07.386435 kubelet[3034]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jun 20 18:27:07.386435 kubelet[3034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 20 18:27:07.386435 kubelet[3034]: I0620 18:27:07.386231 3034 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 18:27:07.845314 kubelet[3034]: I0620 18:27:07.844676 3034 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 18:27:07.845314 kubelet[3034]: I0620 18:27:07.844705 3034 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 18:27:07.845314 kubelet[3034]: I0620 18:27:07.844882 3034 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 18:27:07.858813 kubelet[3034]: E0620 18:27:07.858778 3034 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jun 20 18:27:07.860478 kubelet[3034]: I0620 18:27:07.860455 3034 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 18:27:07.866573 kubelet[3034]: I0620 18:27:07.866559 3034 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 18:27:07.869031 kubelet[3034]: I0620 18:27:07.869009 3034 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 18:27:07.869893 kubelet[3034]: I0620 18:27:07.869859 3034 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 18:27:07.870089 kubelet[3034]: I0620 18:27:07.869967 3034 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-442b0d77ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 18:27:07.870203 kubelet[3034]: I0620 18:27:07.870193 3034 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 18:27:07.870247 kubelet[3034]: I0620 18:27:07.870241 3034 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 18:27:07.870429 kubelet[3034]: I0620 18:27:07.870416 3034 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:27:07.872167 kubelet[3034]: I0620 18:27:07.872150 3034 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 18:27:07.872254 kubelet[3034]: I0620 18:27:07.872244 3034 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 18:27:07.872342 kubelet[3034]: I0620 18:27:07.872333 3034 kubelet.go:386] "Adding apiserver pod source"
Jun 20 18:27:07.872404 kubelet[3034]: I0620 18:27:07.872397 3034 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 18:27:07.874570 kubelet[3034]: E0620 18:27:07.874541 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-442b0d77ef&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jun 20 18:27:07.876325 kubelet[3034]: E0620 18:27:07.876036 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jun 20 18:27:07.876325 kubelet[3034]: I0620 18:27:07.876128 3034 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 18:27:07.876544 kubelet[3034]: I0620 18:27:07.876522 3034 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 18:27:07.876582 kubelet[3034]: W0620 18:27:07.876570 3034 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 20 18:27:07.879157 kubelet[3034]: I0620 18:27:07.878502 3034 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 18:27:07.879157 kubelet[3034]: I0620 18:27:07.878538 3034 server.go:1289] "Started kubelet"
Jun 20 18:27:07.879157 kubelet[3034]: I0620 18:27:07.878758 3034 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 18:27:07.879346 kubelet[3034]: I0620 18:27:07.879330 3034 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 18:27:07.880778 kubelet[3034]: I0620 18:27:07.880327 3034 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 18:27:07.880778 kubelet[3034]: I0620 18:27:07.880614 3034 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 18:27:07.881371 kubelet[3034]: E0620 18:27:07.880708 3034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.0-a-442b0d77ef.184ad391b0e72c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-442b0d77ef,UID:ci-4344.1.0-a-442b0d77ef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-442b0d77ef,},FirstTimestamp:2025-06-20 18:27:07.878517846 +0000 UTC m=+0.514681509,LastTimestamp:2025-06-20 18:27:07.878517846 +0000 UTC m=+0.514681509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-442b0d77ef,}"
Jun 20 18:27:07.883180 kubelet[3034]:
I0620 18:27:07.883150 3034 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:27:07.884120 kubelet[3034]: E0620 18:27:07.884103 3034 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:27:07.884609 kubelet[3034]: I0620 18:27:07.884595 3034 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:27:07.887156 kubelet[3034]: E0620 18:27:07.887123 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" Jun 20 18:27:07.887738 kubelet[3034]: I0620 18:27:07.887274 3034 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:27:07.887738 kubelet[3034]: I0620 18:27:07.887466 3034 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:27:07.887738 kubelet[3034]: I0620 18:27:07.887517 3034 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:27:07.888619 kubelet[3034]: E0620 18:27:07.888595 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:27:07.888887 kubelet[3034]: I0620 18:27:07.888871 3034 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:27:07.889028 kubelet[3034]: I0620 18:27:07.889009 3034 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:27:07.889646 kubelet[3034]: I0620 18:27:07.889533 3034 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jun 20 18:27:07.892108 kubelet[3034]: E0620 18:27:07.892084 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-442b0d77ef?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="200ms" Jun 20 18:27:07.895903 kubelet[3034]: I0620 18:27:07.894611 3034 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:27:07.899235 kubelet[3034]: I0620 18:27:07.899214 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:27:07.899567 kubelet[3034]: I0620 18:27:07.899327 3034 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:27:07.899567 kubelet[3034]: I0620 18:27:07.899351 3034 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:27:07.899567 kubelet[3034]: I0620 18:27:07.899357 3034 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:27:07.899567 kubelet[3034]: E0620 18:27:07.899392 3034 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:27:07.902240 kubelet[3034]: E0620 18:27:07.902223 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:27:07.917185 kubelet[3034]: I0620 18:27:07.917120 3034 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:27:07.917185 kubelet[3034]: I0620 18:27:07.917133 3034 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:27:07.917185 kubelet[3034]: I0620 
18:27:07.917148 3034 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:27:07.923690 kubelet[3034]: I0620 18:27:07.923668 3034 policy_none.go:49] "None policy: Start" Jun 20 18:27:07.923800 kubelet[3034]: I0620 18:27:07.923790 3034 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:27:07.923850 kubelet[3034]: I0620 18:27:07.923843 3034 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:27:07.986331 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:27:07.988307 kubelet[3034]: E0620 18:27:07.987721 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" Jun 20 18:27:08.000298 kubelet[3034]: E0620 18:27:07.999623 3034 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 18:27:07.999720 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:27:08.003580 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:27:08.023667 kubelet[3034]: E0620 18:27:08.023131 3034 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:27:08.023667 kubelet[3034]: I0620 18:27:08.023356 3034 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:27:08.023667 kubelet[3034]: I0620 18:27:08.023367 3034 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:27:08.024321 kubelet[3034]: I0620 18:27:08.024307 3034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:27:08.025483 kubelet[3034]: E0620 18:27:08.025467 3034 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:27:08.025590 kubelet[3034]: E0620 18:27:08.025579 3034 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.0-a-442b0d77ef\" not found" Jun 20 18:27:08.092761 kubelet[3034]: E0620 18:27:08.092719 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-442b0d77ef?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="400ms" Jun 20 18:27:08.126210 kubelet[3034]: I0620 18:27:08.125431 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.126331 kubelet[3034]: E0620 18:27:08.126272 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.288877 kubelet[3034]: I0620 18:27:08.288807 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c4185d8791e65ad157bfb2dc613991f-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-442b0d77ef\" (UID: \"1c4185d8791e65ad157bfb2dc613991f\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.328258 kubelet[3034]: I0620 18:27:08.328234 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.328750 kubelet[3034]: E0620 18:27:08.328727 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.494086 kubelet[3034]: E0620 18:27:08.493956 3034 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-442b0d77ef?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="800ms" Jun 20 18:27:08.731072 kubelet[3034]: I0620 18:27:08.730809 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.731180 kubelet[3034]: E0620 18:27:08.731107 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.17:6443/api/v1/nodes\": dial tcp 10.200.20.17:6443: connect: connection refused" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.771053 systemd[1]: Created slice kubepods-burstable-pod1c4185d8791e65ad157bfb2dc613991f.slice - libcontainer container kubepods-burstable-pod1c4185d8791e65ad157bfb2dc613991f.slice. Jun 20 18:27:08.785736 kubelet[3034]: E0620 18:27:08.785278 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.786094 containerd[1884]: time="2025-06-20T18:27:08.786064682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-442b0d77ef,Uid:1c4185d8791e65ad157bfb2dc613991f,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:08.789514 systemd[1]: Created slice kubepods-burstable-podf502ddef34e93a693ec03cb7f1b73f4d.slice - libcontainer container kubepods-burstable-podf502ddef34e93a693ec03cb7f1b73f4d.slice. 
Jun 20 18:27:08.790382 kubelet[3034]: I0620 18:27:08.790319 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790382 kubelet[3034]: I0620 18:27:08.790347 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790382 kubelet[3034]: I0620 18:27:08.790359 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790577 kubelet[3034]: I0620 18:27:08.790371 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790577 kubelet[3034]: I0620 18:27:08.790527 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-ca-certs\") pod 
\"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790577 kubelet[3034]: I0620 18:27:08.790540 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790577 kubelet[3034]: I0620 18:27:08.790557 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.790754 kubelet[3034]: I0620 18:27:08.790687 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.791119 kubelet[3034]: E0620 18:27:08.791096 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.801821 systemd[1]: Created slice kubepods-burstable-pod8697989381433c218a8e723bdd957b24.slice - libcontainer container kubepods-burstable-pod8697989381433c218a8e723bdd957b24.slice. 
Jun 20 18:27:08.803124 kubelet[3034]: E0620 18:27:08.803103 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:08.847575 containerd[1884]: time="2025-06-20T18:27:08.847500544Z" level=info msg="connecting to shim 0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288" address="unix:///run/containerd/s/17fbd76758e2637f8ec6406316e16625496a3a4c410623d845048d6a04667850" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:08.867433 systemd[1]: Started cri-containerd-0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288.scope - libcontainer container 0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288. Jun 20 18:27:08.872878 kubelet[3034]: E0620 18:27:08.872845 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:27:08.902750 containerd[1884]: time="2025-06-20T18:27:08.902619035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.0-a-442b0d77ef,Uid:1c4185d8791e65ad157bfb2dc613991f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288\"" Jun 20 18:27:08.911832 containerd[1884]: time="2025-06-20T18:27:08.911791544Z" level=info msg="CreateContainer within sandbox \"0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:27:08.948874 containerd[1884]: time="2025-06-20T18:27:08.948829802Z" level=info msg="Container 090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f: CDI devices from CRI 
Config.CDIDevices: []" Jun 20 18:27:08.952005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531257677.mount: Deactivated successfully. Jun 20 18:27:08.960869 kubelet[3034]: E0620 18:27:08.960833 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.0-a-442b0d77ef&limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:27:08.977762 containerd[1884]: time="2025-06-20T18:27:08.977680259Z" level=info msg="CreateContainer within sandbox \"0c48d61bf9bbe630d07309ba85813dc1f70931966233264e2239c5ac020e1288\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f\"" Jun 20 18:27:08.979269 containerd[1884]: time="2025-06-20T18:27:08.978447312Z" level=info msg="StartContainer for \"090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f\"" Jun 20 18:27:08.979432 containerd[1884]: time="2025-06-20T18:27:08.979410948Z" level=info msg="connecting to shim 090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f" address="unix:///run/containerd/s/17fbd76758e2637f8ec6406316e16625496a3a4c410623d845048d6a04667850" protocol=ttrpc version=3 Jun 20 18:27:08.993580 systemd[1]: Started cri-containerd-090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f.scope - libcontainer container 090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f. 
Jun 20 18:27:09.024986 containerd[1884]: time="2025-06-20T18:27:09.024878695Z" level=info msg="StartContainer for \"090790578ada285652a395da7830d85d706cf568157917fe2264acf6c47fe81f\" returns successfully" Jun 20 18:27:09.069971 kubelet[3034]: E0620 18:27:09.069917 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:27:09.090742 kubelet[3034]: E0620 18:27:09.090707 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:27:09.092316 containerd[1884]: time="2025-06-20T18:27:09.092265674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-442b0d77ef,Uid:f502ddef34e93a693ec03cb7f1b73f4d,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:09.104903 containerd[1884]: time="2025-06-20T18:27:09.104784116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-442b0d77ef,Uid:8697989381433c218a8e723bdd957b24,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:09.192651 containerd[1884]: time="2025-06-20T18:27:09.192573206Z" level=info msg="connecting to shim e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3" address="unix:///run/containerd/s/d4cca9bca04e9930e732b2ee87bde73f897bce383ae974a4d5419c41ba36fbe3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:09.211407 containerd[1884]: time="2025-06-20T18:27:09.211298159Z" level=info msg="connecting to shim 
2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3" address="unix:///run/containerd/s/77f438f27df99dc2d9637b55d4b96b924908d87f56af059c5fd28c81110a8203" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:09.211437 systemd[1]: Started cri-containerd-e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3.scope - libcontainer container e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3. Jun 20 18:27:09.236411 systemd[1]: Started cri-containerd-2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3.scope - libcontainer container 2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3. Jun 20 18:27:09.262885 containerd[1884]: time="2025-06-20T18:27:09.261650718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.0-a-442b0d77ef,Uid:f502ddef34e93a693ec03cb7f1b73f4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3\"" Jun 20 18:27:09.271455 containerd[1884]: time="2025-06-20T18:27:09.271421848Z" level=info msg="CreateContainer within sandbox \"e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:27:09.295399 kubelet[3034]: E0620 18:27:09.294936 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.0-a-442b0d77ef?timeout=10s\": dial tcp 10.200.20.17:6443: connect: connection refused" interval="1.6s" Jun 20 18:27:09.298605 containerd[1884]: time="2025-06-20T18:27:09.298577929Z" level=info msg="Container 6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:09.302867 containerd[1884]: time="2025-06-20T18:27:09.302616403Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.0-a-442b0d77ef,Uid:8697989381433c218a8e723bdd957b24,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3\"" Jun 20 18:27:09.313094 containerd[1884]: time="2025-06-20T18:27:09.312513114Z" level=info msg="CreateContainer within sandbox \"2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:27:09.322339 containerd[1884]: time="2025-06-20T18:27:09.322307685Z" level=info msg="CreateContainer within sandbox \"e75b36a0b80e41538a77f5054ccf82468cf8eaedc009cd1065aea90f56301cf3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177\"" Jun 20 18:27:09.322842 containerd[1884]: time="2025-06-20T18:27:09.322818720Z" level=info msg="StartContainer for \"6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177\"" Jun 20 18:27:09.323922 containerd[1884]: time="2025-06-20T18:27:09.323607836Z" level=info msg="connecting to shim 6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177" address="unix:///run/containerd/s/d4cca9bca04e9930e732b2ee87bde73f897bce383ae974a4d5419c41ba36fbe3" protocol=ttrpc version=3 Jun 20 18:27:09.339442 systemd[1]: Started cri-containerd-6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177.scope - libcontainer container 6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177. 
Jun 20 18:27:09.348277 containerd[1884]: time="2025-06-20T18:27:09.347777000Z" level=info msg="Container c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:09.375264 containerd[1884]: time="2025-06-20T18:27:09.375229388Z" level=info msg="StartContainer for \"6ff240193960156e5c8b2dc79389b64f90068443bc73ff864c33ebfb946fb177\" returns successfully" Jun 20 18:27:09.377516 containerd[1884]: time="2025-06-20T18:27:09.377406643Z" level=info msg="CreateContainer within sandbox \"2cc370bee5794c69b8d5ec3dd9e612c7d616449ded1e3b0528b69bceccc9edb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4\"" Jun 20 18:27:09.378136 containerd[1884]: time="2025-06-20T18:27:09.378116917Z" level=info msg="StartContainer for \"c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4\"" Jun 20 18:27:09.379608 containerd[1884]: time="2025-06-20T18:27:09.379585050Z" level=info msg="connecting to shim c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4" address="unix:///run/containerd/s/77f438f27df99dc2d9637b55d4b96b924908d87f56af059c5fd28c81110a8203" protocol=ttrpc version=3 Jun 20 18:27:09.400429 systemd[1]: Started cri-containerd-c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4.scope - libcontainer container c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4. 
Jun 20 18:27:09.458588 containerd[1884]: time="2025-06-20T18:27:09.458555729Z" level=info msg="StartContainer for \"c13afd69c730da0969624a24704e1b684fa1e1dfeb70c4c6e4ddf38a8c9b36b4\" returns successfully" Jun 20 18:27:09.533119 kubelet[3034]: I0620 18:27:09.533091 3034 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:09.927277 kubelet[3034]: E0620 18:27:09.927077 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:09.932667 kubelet[3034]: E0620 18:27:09.932605 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:09.935381 kubelet[3034]: E0620 18:27:09.935356 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.0-a-442b0d77ef\" not found" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:10.851614 kubelet[3034]: E0620 18:27:10.851418 3034 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4344.1.0-a-442b0d77ef.184ad391b0e72c56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.0-a-442b0d77ef,UID:ci-4344.1.0-a-442b0d77ef,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.0-a-442b0d77ef,},FirstTimestamp:2025-06-20 18:27:07.878517846 +0000 UTC m=+0.514681509,LastTimestamp:2025-06-20 18:27:07.878517846 +0000 UTC m=+0.514681509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.0-a-442b0d77ef,}" Jun 20 18:27:10.875766 kubelet[3034]: I0620 
18:27:10.875686 3034 apiserver.go:52] "Watching apiserver" Jun 20 18:27:10.888131 kubelet[3034]: I0620 18:27:10.888083 3034 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:27:10.920376 kubelet[3034]: I0620 18:27:10.920127 3034 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:10.920376 kubelet[3034]: E0620 18:27:10.920166 3034 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344.1.0-a-442b0d77ef\": node \"ci-4344.1.0-a-442b0d77ef\" not found" Jun 20 18:27:10.936542 kubelet[3034]: I0620 18:27:10.936494 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:10.936818 kubelet[3034]: I0620 18:27:10.936726 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:10.991116 kubelet[3034]: I0620 18:27:10.991079 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.035200 kubelet[3034]: E0620 18:27:11.034923 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.035200 kubelet[3034]: E0620 18:27:11.035110 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-442b0d77ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.035431 kubelet[3034]: E0620 18:27:11.035415 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.035490 kubelet[3034]: I0620 18:27:11.035481 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.036826 kubelet[3034]: E0620 18:27:11.036810 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.036989 kubelet[3034]: I0620 18:27:11.036892 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.038191 kubelet[3034]: E0620 18:27:11.038171 3034 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-442b0d77ef\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.938807 kubelet[3034]: I0620 18:27:11.938673 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:11.952310 kubelet[3034]: I0620 18:27:11.952147 3034 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:27:11.991950 kubelet[3034]: I0620 18:27:11.991910 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:12.010192 kubelet[3034]: I0620 18:27:12.010066 3034 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:27:13.404810 systemd[1]: Reload requested from client PID 3304 ('systemctl') (unit session-9.scope)... 
Jun 20 18:27:13.404825 systemd[1]: Reloading... Jun 20 18:27:13.489319 zram_generator::config[3358]: No configuration found. Jun 20 18:27:13.556587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:27:13.655465 systemd[1]: Reloading finished in 250 ms. Jun 20 18:27:13.680857 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:27:13.700280 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:27:13.702349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:27:13.702503 systemd[1]: kubelet.service: Consumed 801ms CPU time, 127M memory peak. Jun 20 18:27:13.704786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:27:13.956177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:27:13.961561 (kubelet)[3415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:27:13.989271 kubelet[3415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:27:13.989271 kubelet[3415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:27:13.989271 kubelet[3415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 18:27:13.989614 kubelet[3415]: I0620 18:27:13.989260 3415 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 20 18:27:13.994358 kubelet[3415]: I0620 18:27:13.994111 3415 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jun 20 18:27:13.994358 kubelet[3415]: I0620 18:27:13.994141 3415 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 20 18:27:13.994916 kubelet[3415]: I0620 18:27:13.994285 3415 server.go:956] "Client rotation is on, will bootstrap in background"
Jun 20 18:27:13.997058 kubelet[3415]: I0620 18:27:13.996995 3415 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jun 20 18:27:13.999129 kubelet[3415]: I0620 18:27:13.999064 3415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 20 18:27:14.003449 kubelet[3415]: I0620 18:27:14.003429 3415 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 20 18:27:14.007177 kubelet[3415]: I0620 18:27:14.007149 3415 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 20 18:27:14.008434 kubelet[3415]: I0620 18:27:14.008401 3415 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 20 18:27:14.008657 kubelet[3415]: I0620 18:27:14.008433 3415 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.0-a-442b0d77ef","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 20 18:27:14.008744 kubelet[3415]: I0620 18:27:14.008662 3415 topology_manager.go:138] "Creating topology manager with none policy"
Jun 20 18:27:14.008744 kubelet[3415]: I0620 18:27:14.008671 3415 container_manager_linux.go:303] "Creating device plugin manager"
Jun 20 18:27:14.008744 kubelet[3415]: I0620 18:27:14.008710 3415 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:27:14.008846 kubelet[3415]: I0620 18:27:14.008831 3415 kubelet.go:480] "Attempting to sync node with API server"
Jun 20 18:27:14.008873 kubelet[3415]: I0620 18:27:14.008849 3415 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 20 18:27:14.008873 kubelet[3415]: I0620 18:27:14.008866 3415 kubelet.go:386] "Adding apiserver pod source"
Jun 20 18:27:14.008907 kubelet[3415]: I0620 18:27:14.008883 3415 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 20 18:27:14.014829 kubelet[3415]: I0620 18:27:14.013348 3415 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 20 18:27:14.014829 kubelet[3415]: I0620 18:27:14.014462 3415 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jun 20 18:27:14.019963 kubelet[3415]: I0620 18:27:14.019943 3415 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jun 20 18:27:14.020052 kubelet[3415]: I0620 18:27:14.019980 3415 server.go:1289] "Started kubelet"
Jun 20 18:27:14.022211 kubelet[3415]: I0620 18:27:14.022170 3415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 20 18:27:14.029776 kubelet[3415]: I0620 18:27:14.028899 3415 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jun 20 18:27:14.029914 kubelet[3415]: I0620 18:27:14.029826 3415 server.go:317] "Adding debug handlers to kubelet server"
Jun 20 18:27:14.034628 kubelet[3415]: I0620 18:27:14.032085 3415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 20 18:27:14.034628 kubelet[3415]: I0620 18:27:14.032280 3415 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 20 18:27:14.034628 kubelet[3415]: I0620 18:27:14.033637 3415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 20 18:27:14.035659 kubelet[3415]: I0620 18:27:14.035639 3415 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jun 20 18:27:14.035833 kubelet[3415]: I0620 18:27:14.035808 3415 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jun 20 18:27:14.036232 kubelet[3415]: I0620 18:27:14.036040 3415 reconciler.go:26] "Reconciler: start to sync state"
Jun 20 18:27:14.038891 kubelet[3415]: I0620 18:27:14.038872 3415 factory.go:223] Registration of the systemd container factory successfully
Jun 20 18:27:14.039074 kubelet[3415]: I0620 18:27:14.039058 3415 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 20 18:27:14.041815 kubelet[3415]: I0620 18:27:14.041784 3415 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jun 20 18:27:14.042813 kubelet[3415]: I0620 18:27:14.042755 3415 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jun 20 18:27:14.042813 kubelet[3415]: I0620 18:27:14.042814 3415 status_manager.go:230] "Starting to sync pod status with apiserver"
Jun 20 18:27:14.042890 kubelet[3415]: I0620 18:27:14.042845 3415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jun 20 18:27:14.042890 kubelet[3415]: I0620 18:27:14.042853 3415 kubelet.go:2436] "Starting kubelet main sync loop"
Jun 20 18:27:14.042929 kubelet[3415]: E0620 18:27:14.042891 3415 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 20 18:27:14.046189 kubelet[3415]: E0620 18:27:14.045388 3415 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 20 18:27:14.046614 kubelet[3415]: I0620 18:27:14.046595 3415 factory.go:223] Registration of the containerd container factory successfully
Jun 20 18:27:14.089839 kubelet[3415]: I0620 18:27:14.089793 3415 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jun 20 18:27:14.089839 kubelet[3415]: I0620 18:27:14.089829 3415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jun 20 18:27:14.089839 kubelet[3415]: I0620 18:27:14.089852 3415 state_mem.go:36] "Initialized new in-memory state store"
Jun 20 18:27:14.089997 kubelet[3415]: I0620 18:27:14.089982 3415 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jun 20 18:27:14.090014 kubelet[3415]: I0620 18:27:14.089990 3415 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jun 20 18:27:14.090014 kubelet[3415]: I0620 18:27:14.090004 3415 policy_none.go:49] "None policy: Start"
Jun 20 18:27:14.090014 kubelet[3415]: I0620 18:27:14.090011 3415 memory_manager.go:186] "Starting memorymanager" policy="None"
Jun 20 18:27:14.090065 kubelet[3415]: I0620 18:27:14.090019 3415 state_mem.go:35] "Initializing new in-memory state store"
Jun 20 18:27:14.090103 kubelet[3415]: I0620 18:27:14.090087 3415 state_mem.go:75] "Updated machine memory state"
Jun 20 18:27:14.093383 kubelet[3415]: E0620 18:27:14.093358 3415 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jun 20 18:27:14.093769 kubelet[3415]: I0620 18:27:14.093752 3415 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 20 18:27:14.093823 kubelet[3415]: I0620 18:27:14.093768 3415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 20 18:27:14.094082 kubelet[3415]: I0620 18:27:14.094065 3415 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 20 18:27:14.095645 kubelet[3415]: E0620 18:27:14.095625 3415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jun 20 18:27:14.144235 kubelet[3415]: I0620 18:27:14.144156 3415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.144559 kubelet[3415]: I0620 18:27:14.144530 3415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.144899 kubelet[3415]: I0620 18:27:14.144863 3415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.160167 kubelet[3415]: I0620 18:27:14.160144 3415 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 18:27:14.160347 kubelet[3415]: E0620 18:27:14.160332 3415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.0-a-442b0d77ef\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.161015 kubelet[3415]: I0620 18:27:14.160994 3415 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 18:27:14.161375 kubelet[3415]: I0620 18:27:14.161354 3415 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 18:27:14.161429 kubelet[3415]: E0620 18:27:14.161406 3415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.202453 kubelet[3415]: I0620 18:27:14.202050 3415 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.219230 kubelet[3415]: I0620 18:27:14.219137 3415 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.219230 kubelet[3415]: I0620 18:27:14.219212 3415 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.336844 kubelet[3415]: I0620 18:27:14.336816 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.336844 kubelet[3415]: I0620 18:27:14.336846 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337258 kubelet[3415]: I0620 18:27:14.336861 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337258 kubelet[3415]: I0620 18:27:14.336883 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c4185d8791e65ad157bfb2dc613991f-kubeconfig\") pod \"kube-scheduler-ci-4344.1.0-a-442b0d77ef\" (UID: \"1c4185d8791e65ad157bfb2dc613991f\") " pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337258 kubelet[3415]: I0620 18:27:14.336895 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337258 kubelet[3415]: I0620 18:27:14.336943 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-ca-certs\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337258 kubelet[3415]: I0620 18:27:14.336964 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-ca-certs\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337365 kubelet[3415]: I0620 18:27:14.336982 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f502ddef34e93a693ec03cb7f1b73f4d-k8s-certs\") pod \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" (UID: \"f502ddef34e93a693ec03cb7f1b73f4d\") " pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:14.337365 kubelet[3415]: I0620 18:27:14.336993 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8697989381433c218a8e723bdd957b24-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.0-a-442b0d77ef\" (UID: \"8697989381433c218a8e723bdd957b24\") " pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:15.013564 kubelet[3415]: I0620 18:27:15.013522 3415 apiserver.go:52] "Watching apiserver"
Jun 20 18:27:15.036094 kubelet[3415]: I0620 18:27:15.036047 3415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jun 20 18:27:15.072340 kubelet[3415]: I0620 18:27:15.072275 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.0-a-442b0d77ef" podStartSLOduration=4.072260132 podStartE2EDuration="4.072260132s" podCreationTimestamp="2025-06-20 18:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:15.053708972 +0000 UTC m=+1.089164991" watchObservedRunningTime="2025-06-20 18:27:15.072260132 +0000 UTC m=+1.107716151"
Jun 20 18:27:15.073521 kubelet[3415]: I0620 18:27:15.073495 3415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:15.092627 kubelet[3415]: I0620 18:27:15.092598 3415 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jun 20 18:27:15.092798 kubelet[3415]: I0620 18:27:15.092762 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef" podStartSLOduration=4.092752443 podStartE2EDuration="4.092752443s" podCreationTimestamp="2025-06-20 18:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:15.072431906 +0000 UTC m=+1.107887949" watchObservedRunningTime="2025-06-20 18:27:15.092752443 +0000 UTC m=+1.128208462"
Jun 20 18:27:15.092946 kubelet[3415]: I0620 18:27:15.092918 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.0-a-442b0d77ef" podStartSLOduration=1.092912649 podStartE2EDuration="1.092912649s" podCreationTimestamp="2025-06-20 18:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:15.092803477 +0000 UTC m=+1.128259496" watchObservedRunningTime="2025-06-20 18:27:15.092912649 +0000 UTC m=+1.128368684"
Jun 20 18:27:15.093013 kubelet[3415]: E0620 18:27:15.092998 3415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.0-a-442b0d77ef\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.0-a-442b0d77ef"
Jun 20 18:27:18.443128 kubelet[3415]: I0620 18:27:18.443054 3415 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jun 20 18:27:18.443782 containerd[1884]: time="2025-06-20T18:27:18.443747232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 20 18:27:18.444358 kubelet[3415]: I0620 18:27:18.443920 3415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jun 20 18:27:19.323615 systemd[1]: Created slice kubepods-besteffort-pod03938350_fb0d_4438_ab82_1c981e8504da.slice - libcontainer container kubepods-besteffort-pod03938350_fb0d_4438_ab82_1c981e8504da.slice.
Jun 20 18:27:19.362874 kubelet[3415]: I0620 18:27:19.362824 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03938350-fb0d-4438-ab82-1c981e8504da-lib-modules\") pod \"kube-proxy-4zwk6\" (UID: \"03938350-fb0d-4438-ab82-1c981e8504da\") " pod="kube-system/kube-proxy-4zwk6"
Jun 20 18:27:19.362874 kubelet[3415]: I0620 18:27:19.362871 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03938350-fb0d-4438-ab82-1c981e8504da-xtables-lock\") pod \"kube-proxy-4zwk6\" (UID: \"03938350-fb0d-4438-ab82-1c981e8504da\") " pod="kube-system/kube-proxy-4zwk6"
Jun 20 18:27:19.362874 kubelet[3415]: I0620 18:27:19.362886 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lcw\" (UniqueName: \"kubernetes.io/projected/03938350-fb0d-4438-ab82-1c981e8504da-kube-api-access-s7lcw\") pod \"kube-proxy-4zwk6\" (UID: \"03938350-fb0d-4438-ab82-1c981e8504da\") " pod="kube-system/kube-proxy-4zwk6"
Jun 20 18:27:19.363076 kubelet[3415]: I0620 18:27:19.362900 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03938350-fb0d-4438-ab82-1c981e8504da-kube-proxy\") pod \"kube-proxy-4zwk6\" (UID: \"03938350-fb0d-4438-ab82-1c981e8504da\") " pod="kube-system/kube-proxy-4zwk6"
Jun 20 18:27:19.606754 systemd[1]: Created slice kubepods-besteffort-pode48d34d0_625e_4931_b8ec_6799a2999266.slice - libcontainer container kubepods-besteffort-pode48d34d0_625e_4931_b8ec_6799a2999266.slice.
Jun 20 18:27:19.631964 containerd[1884]: time="2025-06-20T18:27:19.631884761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zwk6,Uid:03938350-fb0d-4438-ab82-1c981e8504da,Namespace:kube-system,Attempt:0,}"
Jun 20 18:27:19.664269 kubelet[3415]: I0620 18:27:19.664197 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e48d34d0-625e-4931-b8ec-6799a2999266-var-lib-calico\") pod \"tigera-operator-68f7c7984d-7tfxx\" (UID: \"e48d34d0-625e-4931-b8ec-6799a2999266\") " pod="tigera-operator/tigera-operator-68f7c7984d-7tfxx"
Jun 20 18:27:19.664269 kubelet[3415]: I0620 18:27:19.664266 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbcm\" (UniqueName: \"kubernetes.io/projected/e48d34d0-625e-4931-b8ec-6799a2999266-kube-api-access-xgbcm\") pod \"tigera-operator-68f7c7984d-7tfxx\" (UID: \"e48d34d0-625e-4931-b8ec-6799a2999266\") " pod="tigera-operator/tigera-operator-68f7c7984d-7tfxx"
Jun 20 18:27:19.698832 containerd[1884]: time="2025-06-20T18:27:19.698793538Z" level=info msg="connecting to shim 8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a" address="unix:///run/containerd/s/0325326433b3d31e594904b1c5b7a41dee3cb8157b70371c86e7b49713ed0b13" namespace=k8s.io protocol=ttrpc version=3
Jun 20 18:27:19.720436 systemd[1]: Started cri-containerd-8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a.scope - libcontainer container 8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a.
Jun 20 18:27:19.743548 containerd[1884]: time="2025-06-20T18:27:19.743437377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4zwk6,Uid:03938350-fb0d-4438-ab82-1c981e8504da,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a\""
Jun 20 18:27:19.753373 containerd[1884]: time="2025-06-20T18:27:19.753327181Z" level=info msg="CreateContainer within sandbox \"8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 20 18:27:19.788965 containerd[1884]: time="2025-06-20T18:27:19.788920350Z" level=info msg="Container c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:27:19.811950 containerd[1884]: time="2025-06-20T18:27:19.811847976Z" level=info msg="CreateContainer within sandbox \"8c24c2adc1f18b48849537ee4d02765a6a9e4ce3300864f44736a8f2138eb25a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf\""
Jun 20 18:27:19.813412 containerd[1884]: time="2025-06-20T18:27:19.813381759Z" level=info msg="StartContainer for \"c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf\""
Jun 20 18:27:19.814530 containerd[1884]: time="2025-06-20T18:27:19.814502991Z" level=info msg="connecting to shim c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf" address="unix:///run/containerd/s/0325326433b3d31e594904b1c5b7a41dee3cb8157b70371c86e7b49713ed0b13" protocol=ttrpc version=3
Jun 20 18:27:19.831427 systemd[1]: Started cri-containerd-c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf.scope - libcontainer container c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf.
Jun 20 18:27:19.863330 containerd[1884]: time="2025-06-20T18:27:19.862890317Z" level=info msg="StartContainer for \"c2ef3ec98a28b54aea2451f9bb4e1dad98ff42c936d716b15c820c0a93d37abf\" returns successfully"
Jun 20 18:27:19.910237 containerd[1884]: time="2025-06-20T18:27:19.910184435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-7tfxx,Uid:e48d34d0-625e-4931-b8ec-6799a2999266,Namespace:tigera-operator,Attempt:0,}"
Jun 20 18:27:19.973522 containerd[1884]: time="2025-06-20T18:27:19.973442601Z" level=info msg="connecting to shim be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da" address="unix:///run/containerd/s/f7ed2da3a944a7f5cb4b219232cb185118e057ec5ca9104a6391ba86420a6aff" namespace=k8s.io protocol=ttrpc version=3
Jun 20 18:27:19.989443 systemd[1]: Started cri-containerd-be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da.scope - libcontainer container be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da.
Jun 20 18:27:20.020005 containerd[1884]: time="2025-06-20T18:27:20.019966667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-7tfxx,Uid:e48d34d0-625e-4931-b8ec-6799a2999266,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da\""
Jun 20 18:27:20.022329 containerd[1884]: time="2025-06-20T18:27:20.022301591Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\""
Jun 20 18:27:20.098240 kubelet[3415]: I0620 18:27:20.098116 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4zwk6" podStartSLOduration=1.098097952 podStartE2EDuration="1.098097952s" podCreationTimestamp="2025-06-20 18:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:20.09726597 +0000 UTC m=+6.132721989" watchObservedRunningTime="2025-06-20 18:27:20.098097952 +0000 UTC m=+6.133553971"
Jun 20 18:27:21.689723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039453502.mount: Deactivated successfully.
Jun 20 18:27:22.330036 containerd[1884]: time="2025-06-20T18:27:22.329519636Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:22.334117 containerd[1884]: time="2025-06-20T18:27:22.334074576Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=22149772"
Jun 20 18:27:22.343626 containerd[1884]: time="2025-06-20T18:27:22.343591918Z" level=info msg="ImageCreate event name:\"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:22.351532 containerd[1884]: time="2025-06-20T18:27:22.351504227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 18:27:22.351839 containerd[1884]: time="2025-06-20T18:27:22.351813614Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"22145767\" in 2.329383698s"
Jun 20 18:27:22.351897 containerd[1884]: time="2025-06-20T18:27:22.351841823Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:a609dbfb508b74674e197a0df0042072d3c085d1c48be4041b1633d3d69e3d5d\""
Jun 20 18:27:22.360903 containerd[1884]: time="2025-06-20T18:27:22.360843635Z" level=info msg="CreateContainer within sandbox \"be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jun 20 18:27:22.404484 containerd[1884]: time="2025-06-20T18:27:22.404428828Z" level=info msg="Container 316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521: CDI devices from CRI Config.CDIDevices: []"
Jun 20 18:27:22.427564 containerd[1884]: time="2025-06-20T18:27:22.427524308Z" level=info msg="CreateContainer within sandbox \"be2ad3290e1ecc705fc8c22f9f1ab964283e3aac7e284b450a76e16952ca02da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521\""
Jun 20 18:27:22.428245 containerd[1884]: time="2025-06-20T18:27:22.428097720Z" level=info msg="StartContainer for \"316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521\""
Jun 20 18:27:22.429665 containerd[1884]: time="2025-06-20T18:27:22.429604494Z" level=info msg="connecting to shim 316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521" address="unix:///run/containerd/s/f7ed2da3a944a7f5cb4b219232cb185118e057ec5ca9104a6391ba86420a6aff" protocol=ttrpc version=3
Jun 20 18:27:22.445434 systemd[1]: Started cri-containerd-316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521.scope - libcontainer container 316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521.
Jun 20 18:27:22.472224 containerd[1884]: time="2025-06-20T18:27:22.472158570Z" level=info msg="StartContainer for \"316d57c00050da83a999c1b05f3f1b6307010d5aba910e5907f668abd5c7d521\" returns successfully"
Jun 20 18:27:23.100982 kubelet[3415]: I0620 18:27:23.100920 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-7tfxx" podStartSLOduration=1.769043473 podStartE2EDuration="4.100904828s" podCreationTimestamp="2025-06-20 18:27:19 +0000 UTC" firstStartedPulling="2025-06-20 18:27:20.021153398 +0000 UTC m=+6.056609417" lastFinishedPulling="2025-06-20 18:27:22.353014753 +0000 UTC m=+8.388470772" observedRunningTime="2025-06-20 18:27:23.100763495 +0000 UTC m=+9.136219554" watchObservedRunningTime="2025-06-20 18:27:23.100904828 +0000 UTC m=+9.136360847"
Jun 20 18:27:27.565886 sudo[2401]: pam_unix(sudo:session): session closed for user root
Jun 20 18:27:27.643785 sshd[2400]: Connection closed by 10.200.16.10 port 60422
Jun 20 18:27:27.646472 sshd-session[2398]: pam_unix(sshd:session): session closed for user core
Jun 20 18:27:27.648933 systemd[1]: sshd@6-10.200.20.17:22-10.200.16.10:60422.service: Deactivated successfully.
Jun 20 18:27:27.651921 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 18:27:27.653400 systemd[1]: session-9.scope: Consumed 4.198s CPU time, 229.2M memory peak.
Jun 20 18:27:27.657064 systemd-logind[1867]: Session 9 logged out. Waiting for processes to exit.
Jun 20 18:27:27.658897 systemd-logind[1867]: Removed session 9.
Jun 20 18:27:31.823412 systemd[1]: Created slice kubepods-besteffort-pod911e811e_18ef_4cbc_b456_f3e5987b3272.slice - libcontainer container kubepods-besteffort-pod911e811e_18ef_4cbc_b456_f3e5987b3272.slice.
Jun 20 18:27:31.931561 kubelet[3415]: I0620 18:27:31.931508 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/911e811e-18ef-4cbc-b456-f3e5987b3272-typha-certs\") pod \"calico-typha-766f5db5db-npfxc\" (UID: \"911e811e-18ef-4cbc-b456-f3e5987b3272\") " pod="calico-system/calico-typha-766f5db5db-npfxc"
Jun 20 18:27:31.932037 kubelet[3415]: I0620 18:27:31.931955 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp72b\" (UniqueName: \"kubernetes.io/projected/911e811e-18ef-4cbc-b456-f3e5987b3272-kube-api-access-xp72b\") pod \"calico-typha-766f5db5db-npfxc\" (UID: \"911e811e-18ef-4cbc-b456-f3e5987b3272\") " pod="calico-system/calico-typha-766f5db5db-npfxc"
Jun 20 18:27:31.932037 kubelet[3415]: I0620 18:27:31.931997 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/911e811e-18ef-4cbc-b456-f3e5987b3272-tigera-ca-bundle\") pod \"calico-typha-766f5db5db-npfxc\" (UID: \"911e811e-18ef-4cbc-b456-f3e5987b3272\") " pod="calico-system/calico-typha-766f5db5db-npfxc"
Jun 20 18:27:32.021540 systemd[1]: Created slice kubepods-besteffort-pod8080209e_fadf_4ffd_8c7d_871bd9a49ce9.slice - libcontainer container kubepods-besteffort-pod8080209e_fadf_4ffd_8c7d_871bd9a49ce9.slice.
Jun 20 18:27:32.033008 kubelet[3415]: I0620 18:27:32.032457 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-cni-log-dir\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033008 kubelet[3415]: I0620 18:27:32.032489 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-var-lib-calico\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033008 kubelet[3415]: I0620 18:27:32.032500 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkmk\" (UniqueName: \"kubernetes.io/projected/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-kube-api-access-6jkmk\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033008 kubelet[3415]: I0620 18:27:32.032514 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-policysync\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033008 kubelet[3415]: I0620 18:27:32.032525 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-tigera-ca-bundle\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033181 kubelet[3415]: I0620 18:27:32.032544 
3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-flexvol-driver-host\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033181 kubelet[3415]: I0620 18:27:32.032555 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-var-run-calico\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033181 kubelet[3415]: I0620 18:27:32.032565 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-cni-bin-dir\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033181 kubelet[3415]: I0620 18:27:32.032575 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-cni-net-dir\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.033181 kubelet[3415]: I0620 18:27:32.032583 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-lib-modules\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.034101 kubelet[3415]: I0620 18:27:32.032593 3415 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-node-certs\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.034101 kubelet[3415]: I0620 18:27:32.032608 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8080209e-fadf-4ffd-8c7d-871bd9a49ce9-xtables-lock\") pod \"calico-node-sps5r\" (UID: \"8080209e-fadf-4ffd-8c7d-871bd9a49ce9\") " pod="calico-system/calico-node-sps5r" Jun 20 18:27:32.127193 containerd[1884]: time="2025-06-20T18:27:32.126694549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766f5db5db-npfxc,Uid:911e811e-18ef-4cbc-b456-f3e5987b3272,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:32.134644 kubelet[3415]: E0620 18:27:32.134604 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.134644 kubelet[3415]: W0620 18:27:32.134628 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.134644 kubelet[3415]: E0620 18:27:32.134648 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.199141 kubelet[3415]: E0620 18:27:32.198933 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:32.199968 containerd[1884]: time="2025-06-20T18:27:32.199838785Z" level=info msg="connecting to shim bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e" address="unix:///run/containerd/s/3bffa04dc7b46c97cc019d24c2561be54eaf3875a53c6017d583f250fd4930c1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:32.222880 systemd[1]: Started cri-containerd-bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e.scope - libcontainer container bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e. 
Jun 20 18:27:32.244884 kubelet[3415]: I0620 18:27:32.244860 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a9e252c-f052-4e18-abf4-a391b1d4aaf8-kubelet-dir\") pod \"csi-node-driver-5nnz8\" (UID: \"0a9e252c-f052-4e18-abf4-a391b1d4aaf8\") " pod="calico-system/csi-node-driver-5nnz8" Jun 20 18:27:32.245195 kubelet[3415]: I0620 18:27:32.245138 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a9e252c-f052-4e18-abf4-a391b1d4aaf8-registration-dir\") pod \"csi-node-driver-5nnz8\" (UID: \"0a9e252c-f052-4e18-abf4-a391b1d4aaf8\") " pod="calico-system/csi-node-driver-5nnz8" Jun 20 18:27:32.245363 kubelet[3415]: I0620 18:27:32.245373 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a9e252c-f052-4e18-abf4-a391b1d4aaf8-socket-dir\") pod \"csi-node-driver-5nnz8\" (UID: \"0a9e252c-f052-4e18-abf4-a391b1d4aaf8\") " pod="calico-system/csi-node-driver-5nnz8" Jun 20 18:27:32.245845 kubelet[3415]: I0620 18:27:32.245778 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a9e252c-f052-4e18-abf4-a391b1d4aaf8-varrun\") pod \"csi-node-driver-5nnz8\" (UID: \"0a9e252c-f052-4e18-abf4-a391b1d4aaf8\") " pod="calico-system/csi-node-driver-5nnz8" 
Jun 20 18:27:32.279983 containerd[1884]: time="2025-06-20T18:27:32.279938551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766f5db5db-npfxc,Uid:911e811e-18ef-4cbc-b456-f3e5987b3272,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e\"" Jun 20 18:27:32.283013 containerd[1884]: time="2025-06-20T18:27:32.282749036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 18:27:32.325787 containerd[1884]: time="2025-06-20T18:27:32.325749748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sps5r,Uid:8080209e-fadf-4ffd-8c7d-871bd9a49ce9,Namespace:calico-system,Attempt:0,}" 
Jun 20 18:27:32.349039 kubelet[3415]: E0620 18:27:32.348553 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349039 kubelet[3415]: W0620 18:27:32.348558 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349039 kubelet[3415]: E0620 18:27:32.348564 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.349039 kubelet[3415]: E0620 18:27:32.348662 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349039 kubelet[3415]: W0620 18:27:32.348673 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349039 kubelet[3415]: E0620 18:27:32.348679 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.349039 kubelet[3415]: E0620 18:27:32.348777 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349196 kubelet[3415]: W0620 18:27:32.348782 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349196 kubelet[3415]: E0620 18:27:32.348787 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.349196 kubelet[3415]: I0620 18:27:32.348800 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6j7\" (UniqueName: \"kubernetes.io/projected/0a9e252c-f052-4e18-abf4-a391b1d4aaf8-kube-api-access-hm6j7\") pod \"csi-node-driver-5nnz8\" (UID: \"0a9e252c-f052-4e18-abf4-a391b1d4aaf8\") " pod="calico-system/csi-node-driver-5nnz8" Jun 20 18:27:32.349196 kubelet[3415]: E0620 18:27:32.348935 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349196 kubelet[3415]: W0620 18:27:32.348942 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349196 kubelet[3415]: E0620 18:27:32.348947 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.349196 kubelet[3415]: E0620 18:27:32.349029 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349196 kubelet[3415]: W0620 18:27:32.349033 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349196 kubelet[3415]: E0620 18:27:32.349038 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349144 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349767 kubelet[3415]: W0620 18:27:32.349150 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349155 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349253 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349767 kubelet[3415]: W0620 18:27:32.349257 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349264 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349360 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.349767 kubelet[3415]: W0620 18:27:32.349365 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349370 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.349767 kubelet[3415]: E0620 18:27:32.349506 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.350717 kubelet[3415]: W0620 18:27:32.349511 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.349516 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.349785 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.350717 kubelet[3415]: W0620 18:27:32.349801 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.349810 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.349940 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.350717 kubelet[3415]: W0620 18:27:32.349962 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.349971 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.350717 kubelet[3415]: E0620 18:27:32.350076 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.350717 kubelet[3415]: W0620 18:27:32.350081 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.350087 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.350863 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.351086 kubelet[3415]: W0620 18:27:32.350874 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.350884 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.351048 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.351086 kubelet[3415]: W0620 18:27:32.351054 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.351077 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.351204 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.351086 kubelet[3415]: W0620 18:27:32.351210 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.351086 kubelet[3415]: E0620 18:27:32.351229 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351379 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352100 kubelet[3415]: W0620 18:27:32.351385 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351392 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351529 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352100 kubelet[3415]: W0620 18:27:32.351535 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351541 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351690 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352100 kubelet[3415]: W0620 18:27:32.351710 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351717 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.352100 kubelet[3415]: E0620 18:27:32.351833 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352479 kubelet[3415]: W0620 18:27:32.351839 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352479 kubelet[3415]: E0620 18:27:32.351845 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.352479 kubelet[3415]: E0620 18:27:32.352019 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352479 kubelet[3415]: W0620 18:27:32.352025 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352479 kubelet[3415]: E0620 18:27:32.352032 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.352479 kubelet[3415]: E0620 18:27:32.352161 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.352479 kubelet[3415]: W0620 18:27:32.352167 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.352479 kubelet[3415]: E0620 18:27:32.352173 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.408331 containerd[1884]: time="2025-06-20T18:27:32.407438499Z" level=info msg="connecting to shim 619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5" address="unix:///run/containerd/s/e39c73378cc747dafb6db8943cf442520a3299d182f3c4182b494f2ab08af434" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:32.428764 systemd[1]: Started cri-containerd-619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5.scope - libcontainer container 619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5. 
Jun 20 18:27:32.450408 kubelet[3415]: E0620 18:27:32.450271 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.451544 kubelet[3415]: W0620 18:27:32.451329 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.451544 kubelet[3415]: E0620 18:27:32.451360 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.451887 kubelet[3415]: E0620 18:27:32.451867 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.452780 kubelet[3415]: W0620 18:27:32.452057 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.452780 kubelet[3415]: E0620 18:27:32.452077 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.453160 kubelet[3415]: E0620 18:27:32.453132 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.453216 kubelet[3415]: W0620 18:27:32.453180 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.453216 kubelet[3415]: E0620 18:27:32.453195 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.453699 kubelet[3415]: E0620 18:27:32.453680 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.453742 kubelet[3415]: W0620 18:27:32.453697 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.453742 kubelet[3415]: E0620 18:27:32.453731 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:32.454018 kubelet[3415]: E0620 18:27:32.453998 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.454018 kubelet[3415]: W0620 18:27:32.454010 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.454090 kubelet[3415]: E0620 18:27:32.454046 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:32.462703 containerd[1884]: time="2025-06-20T18:27:32.462667483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sps5r,Uid:8080209e-fadf-4ffd-8c7d-871bd9a49ce9,Namespace:calico-system,Attempt:0,} returns sandbox id \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\"" Jun 20 18:27:32.465284 kubelet[3415]: E0620 18:27:32.464769 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:32.465284 kubelet[3415]: W0620 18:27:32.465069 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:32.465284 kubelet[3415]: E0620 18:27:32.465085 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:33.558824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3155134768.mount: Deactivated successfully. 
Jun 20 18:27:34.046723 kubelet[3415]: E0620 18:27:34.046688 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:34.125598 containerd[1884]: time="2025-06-20T18:27:34.125098765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:34.129816 containerd[1884]: time="2025-06-20T18:27:34.129784387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=33070817" Jun 20 18:27:34.138413 containerd[1884]: time="2025-06-20T18:27:34.138359414Z" level=info msg="ImageCreate event name:\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:34.144916 containerd[1884]: time="2025-06-20T18:27:34.144798059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:34.145733 containerd[1884]: time="2025-06-20T18:27:34.145640973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"33070671\" in 1.862410888s" Jun 20 18:27:34.145733 containerd[1884]: time="2025-06-20T18:27:34.145670334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference 
\"sha256:1262cbfe18a2279607d44e272e4adfb90c58d0fddc53d91b584a126a76dfe521\"" Jun 20 18:27:34.147206 containerd[1884]: time="2025-06-20T18:27:34.146974010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 18:27:34.166901 containerd[1884]: time="2025-06-20T18:27:34.166864292Z" level=info msg="CreateContainer within sandbox \"bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 18:27:34.209634 containerd[1884]: time="2025-06-20T18:27:34.209592280Z" level=info msg="Container e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:34.234919 containerd[1884]: time="2025-06-20T18:27:34.234833798Z" level=info msg="CreateContainer within sandbox \"bbadd5511407b91fdb3b16bd5dc67dea4db69af57e2f7d11f3400f947dc5682e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2\"" Jun 20 18:27:34.236147 containerd[1884]: time="2025-06-20T18:27:34.236076673Z" level=info msg="StartContainer for \"e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2\"" Jun 20 18:27:34.237726 containerd[1884]: time="2025-06-20T18:27:34.237651675Z" level=info msg="connecting to shim e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2" address="unix:///run/containerd/s/3bffa04dc7b46c97cc019d24c2561be54eaf3875a53c6017d583f250fd4930c1" protocol=ttrpc version=3 Jun 20 18:27:34.256462 systemd[1]: Started cri-containerd-e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2.scope - libcontainer container e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2. 
Jun 20 18:27:34.292452 containerd[1884]: time="2025-06-20T18:27:34.292417614Z" level=info msg="StartContainer for \"e000acbdc34adf8b761d1729eb9cb0e6ca661e038adb1507b70cad384c253fa2\" returns successfully" Jun 20 18:27:35.130117 kubelet[3415]: I0620 18:27:35.130056 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-766f5db5db-npfxc" podStartSLOduration=2.265584875 podStartE2EDuration="4.130038894s" podCreationTimestamp="2025-06-20 18:27:31 +0000 UTC" firstStartedPulling="2025-06-20 18:27:32.282199768 +0000 UTC m=+18.317655787" lastFinishedPulling="2025-06-20 18:27:34.146653787 +0000 UTC m=+20.182109806" observedRunningTime="2025-06-20 18:27:35.128551134 +0000 UTC m=+21.164007153" watchObservedRunningTime="2025-06-20 18:27:35.130038894 +0000 UTC m=+21.165494921" Jun 20 18:27:35.174100 kubelet[3415]: E0620 18:27:35.174065 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174100 kubelet[3415]: W0620 18:27:35.174091 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174100 kubelet[3415]: E0620 18:27:35.174111 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.174345 kubelet[3415]: E0620 18:27:35.174250 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174345 kubelet[3415]: W0620 18:27:35.174257 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174345 kubelet[3415]: E0620 18:27:35.174323 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.174444 kubelet[3415]: E0620 18:27:35.174432 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174444 kubelet[3415]: W0620 18:27:35.174438 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174444 kubelet[3415]: E0620 18:27:35.174444 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.174624 kubelet[3415]: E0620 18:27:35.174611 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174624 kubelet[3415]: W0620 18:27:35.174622 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174690 kubelet[3415]: E0620 18:27:35.174629 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.174750 kubelet[3415]: E0620 18:27:35.174739 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174750 kubelet[3415]: W0620 18:27:35.174745 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174750 kubelet[3415]: E0620 18:27:35.174751 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.174932 kubelet[3415]: E0620 18:27:35.174920 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.174932 kubelet[3415]: W0620 18:27:35.174929 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.174932 kubelet[3415]: E0620 18:27:35.174937 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.175055 kubelet[3415]: E0620 18:27:35.175027 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.175055 kubelet[3415]: W0620 18:27:35.175032 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.175055 kubelet[3415]: E0620 18:27:35.175037 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.175177 kubelet[3415]: E0620 18:27:35.175149 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.175177 kubelet[3415]: W0620 18:27:35.175154 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.175177 kubelet[3415]: E0620 18:27:35.175160 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175257 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.175739 kubelet[3415]: W0620 18:27:35.175263 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175269 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175378 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.175739 kubelet[3415]: W0620 18:27:35.175383 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175388 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175459 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.175739 kubelet[3415]: W0620 18:27:35.175465 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175470 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.175739 kubelet[3415]: E0620 18:27:35.175541 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.176046 kubelet[3415]: W0620 18:27:35.175545 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.176046 kubelet[3415]: E0620 18:27:35.175550 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.176046 kubelet[3415]: E0620 18:27:35.175702 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.176046 kubelet[3415]: W0620 18:27:35.175711 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.176046 kubelet[3415]: E0620 18:27:35.175718 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.176046 kubelet[3415]: E0620 18:27:35.175816 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.176046 kubelet[3415]: W0620 18:27:35.175822 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.176046 kubelet[3415]: E0620 18:27:35.175827 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.176226 kubelet[3415]: E0620 18:27:35.176209 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.176226 kubelet[3415]: W0620 18:27:35.176224 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.176272 kubelet[3415]: E0620 18:27:35.176234 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.268460 kubelet[3415]: E0620 18:27:35.268399 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.268460 kubelet[3415]: W0620 18:27:35.268420 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.268460 kubelet[3415]: E0620 18:27:35.268437 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.268893 kubelet[3415]: E0620 18:27:35.268847 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.268893 kubelet[3415]: W0620 18:27:35.268862 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.268893 kubelet[3415]: E0620 18:27:35.268878 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.269264 kubelet[3415]: E0620 18:27:35.269220 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.269264 kubelet[3415]: W0620 18:27:35.269233 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.269264 kubelet[3415]: E0620 18:27:35.269244 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.269502 kubelet[3415]: E0620 18:27:35.269476 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.269502 kubelet[3415]: W0620 18:27:35.269495 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.269581 kubelet[3415]: E0620 18:27:35.269508 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.269643 kubelet[3415]: E0620 18:27:35.269631 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.269643 kubelet[3415]: W0620 18:27:35.269639 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.269714 kubelet[3415]: E0620 18:27:35.269647 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.269754 kubelet[3415]: E0620 18:27:35.269738 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.269754 kubelet[3415]: W0620 18:27:35.269744 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.269754 kubelet[3415]: E0620 18:27:35.269750 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.269924 kubelet[3415]: E0620 18:27:35.269910 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.269924 kubelet[3415]: W0620 18:27:35.269920 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.270023 kubelet[3415]: E0620 18:27:35.269933 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.270300 kubelet[3415]: E0620 18:27:35.270260 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.270300 kubelet[3415]: W0620 18:27:35.270275 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.270447 kubelet[3415]: E0620 18:27:35.270384 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.270645 kubelet[3415]: E0620 18:27:35.270630 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.270724 kubelet[3415]: W0620 18:27:35.270713 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.270844 kubelet[3415]: E0620 18:27:35.270785 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.271034 kubelet[3415]: E0620 18:27:35.271022 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.271195 kubelet[3415]: W0620 18:27:35.271094 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.271195 kubelet[3415]: E0620 18:27:35.271109 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.271356 kubelet[3415]: E0620 18:27:35.271345 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.271496 kubelet[3415]: W0620 18:27:35.271399 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.271496 kubelet[3415]: E0620 18:27:35.271413 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.271719 kubelet[3415]: E0620 18:27:35.271689 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.271719 kubelet[3415]: W0620 18:27:35.271703 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.271861 kubelet[3415]: E0620 18:27:35.271787 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.272066 kubelet[3415]: E0620 18:27:35.272032 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.272066 kubelet[3415]: W0620 18:27:35.272044 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.272066 kubelet[3415]: E0620 18:27:35.272053 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.272391 kubelet[3415]: E0620 18:27:35.272368 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.272391 kubelet[3415]: W0620 18:27:35.272383 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.272391 kubelet[3415]: E0620 18:27:35.272393 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.272602 kubelet[3415]: E0620 18:27:35.272489 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.272602 kubelet[3415]: W0620 18:27:35.272495 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.272602 kubelet[3415]: E0620 18:27:35.272502 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.272602 kubelet[3415]: E0620 18:27:35.272605 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.272812 kubelet[3415]: W0620 18:27:35.272612 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.272812 kubelet[3415]: E0620 18:27:35.272618 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.272943 kubelet[3415]: E0620 18:27:35.272918 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.272943 kubelet[3415]: W0620 18:27:35.272930 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.272943 kubelet[3415]: E0620 18:27:35.272938 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 18:27:35.273719 kubelet[3415]: E0620 18:27:35.273702 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 18:27:35.273850 kubelet[3415]: W0620 18:27:35.273794 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 18:27:35.273850 kubelet[3415]: E0620 18:27:35.273825 3415 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 18:27:35.643713 containerd[1884]: time="2025-06-20T18:27:35.643654022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:35.647159 containerd[1884]: time="2025-06-20T18:27:35.647113801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4264319" Jun 20 18:27:35.653044 containerd[1884]: time="2025-06-20T18:27:35.652985753Z" level=info msg="ImageCreate event name:\"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:35.659607 containerd[1884]: time="2025-06-20T18:27:35.659544184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:35.660187 containerd[1884]: time="2025-06-20T18:27:35.659897728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5633520\" in 1.51255143s" Jun 20 18:27:35.660187 containerd[1884]: time="2025-06-20T18:27:35.659930648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:6f200839ca0e1e01d4b68b505fdb4df21201601c13d86418fe011a3244617bdb\"" Jun 20 18:27:35.669389 containerd[1884]: time="2025-06-20T18:27:35.669358830Z" level=info msg="CreateContainer within sandbox \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 18:27:35.712246 containerd[1884]: time="2025-06-20T18:27:35.711510509Z" level=info msg="Container f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:35.737804 containerd[1884]: time="2025-06-20T18:27:35.737751625Z" level=info msg="CreateContainer within sandbox \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\"" Jun 20 18:27:35.739075 containerd[1884]: time="2025-06-20T18:27:35.738721503Z" level=info msg="StartContainer for \"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\"" Jun 20 18:27:35.740220 containerd[1884]: time="2025-06-20T18:27:35.740191631Z" level=info msg="connecting to shim f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5" address="unix:///run/containerd/s/e39c73378cc747dafb6db8943cf442520a3299d182f3c4182b494f2ab08af434" protocol=ttrpc version=3 Jun 20 18:27:35.759440 systemd[1]: Started cri-containerd-f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5.scope - libcontainer container f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5. Jun 20 18:27:35.793767 containerd[1884]: time="2025-06-20T18:27:35.793700877Z" level=info msg="StartContainer for \"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\" returns successfully" Jun 20 18:27:35.796512 systemd[1]: cri-containerd-f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5.scope: Deactivated successfully. 
Jun 20 18:27:35.801079 containerd[1884]: time="2025-06-20T18:27:35.801033565Z" level=info msg="received exit event container_id:\"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\" id:\"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\" pid:4089 exited_at:{seconds:1750444055 nanos:800515722}" Jun 20 18:27:35.801466 containerd[1884]: time="2025-06-20T18:27:35.801361812Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\" id:\"f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5\" pid:4089 exited_at:{seconds:1750444055 nanos:800515722}" Jun 20 18:27:35.816669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4282a07728927ed4ac167da72d3362dddf67d7d319ec5653e3585fc1a6b0cb5-rootfs.mount: Deactivated successfully. Jun 20 18:27:36.044583 kubelet[3415]: E0620 18:27:36.044041 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:37.126783 containerd[1884]: time="2025-06-20T18:27:37.126736084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 18:27:38.043529 kubelet[3415]: E0620 18:27:38.043480 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:39.687095 containerd[1884]: time="2025-06-20T18:27:39.686588610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 
18:27:39.689973 containerd[1884]: time="2025-06-20T18:27:39.689941384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=65872909" Jun 20 18:27:39.695197 containerd[1884]: time="2025-06-20T18:27:39.695172161Z" level=info msg="ImageCreate event name:\"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:39.700911 containerd[1884]: time="2025-06-20T18:27:39.700861098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:39.701382 containerd[1884]: time="2025-06-20T18:27:39.701272353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"67242150\" in 2.574492515s" Jun 20 18:27:39.701382 containerd[1884]: time="2025-06-20T18:27:39.701309538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:de950b144463fd7ea1fffd9357f354ee83b4a5191d9829bbffc11aea1a6f5e55\"" Jun 20 18:27:39.708763 containerd[1884]: time="2025-06-20T18:27:39.708737928Z" level=info msg="CreateContainer within sandbox \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 18:27:39.742942 containerd[1884]: time="2025-06-20T18:27:39.742895423Z" level=info msg="Container 8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:39.763410 containerd[1884]: time="2025-06-20T18:27:39.763364731Z" level=info msg="CreateContainer within sandbox 
\"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\"" Jun 20 18:27:39.764057 containerd[1884]: time="2025-06-20T18:27:39.763952575Z" level=info msg="StartContainer for \"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\"" Jun 20 18:27:39.765212 containerd[1884]: time="2025-06-20T18:27:39.765148498Z" level=info msg="connecting to shim 8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b" address="unix:///run/containerd/s/e39c73378cc747dafb6db8943cf442520a3299d182f3c4182b494f2ab08af434" protocol=ttrpc version=3 Jun 20 18:27:39.790429 systemd[1]: Started cri-containerd-8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b.scope - libcontainer container 8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b. Jun 20 18:27:39.824144 containerd[1884]: time="2025-06-20T18:27:39.823948696Z" level=info msg="StartContainer for \"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\" returns successfully" Jun 20 18:27:40.044715 kubelet[3415]: E0620 18:27:40.044581 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:41.018325 containerd[1884]: time="2025-06-20T18:27:41.018247092Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:27:41.020127 systemd[1]: cri-containerd-8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b.scope: Deactivated successfully. 
Jun 20 18:27:41.021184 systemd[1]: cri-containerd-8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b.scope: Consumed 314ms CPU time, 191.8M memory peak, 165.8M written to disk. Jun 20 18:27:41.022749 containerd[1884]: time="2025-06-20T18:27:41.022717818Z" level=info msg="received exit event container_id:\"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\" id:\"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\" pid:4152 exited_at:{seconds:1750444061 nanos:21644844}" Jun 20 18:27:41.023141 containerd[1884]: time="2025-06-20T18:27:41.022953610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\" id:\"8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b\" pid:4152 exited_at:{seconds:1750444061 nanos:21644844}" Jun 20 18:27:41.040478 kubelet[3415]: I0620 18:27:41.039387 3415 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:27:41.042935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd02d7350c381cbd8fb0570bf2a49fe50f8c9d81d2245e4cf3189d58008037b-rootfs.mount: Deactivated successfully. Jun 20 18:27:41.883902 systemd[1]: Created slice kubepods-burstable-pod9b1d823a_0ec6_496a_9a96_cd9bacc490d2.slice - libcontainer container kubepods-burstable-pod9b1d823a_0ec6_496a_9a96_cd9bacc490d2.slice. Jun 20 18:27:41.892409 systemd[1]: Created slice kubepods-burstable-podec2b0ad4_db12_493d_94fb_15a7feb27fa3.slice - libcontainer container kubepods-burstable-podec2b0ad4_db12_493d_94fb_15a7feb27fa3.slice. Jun 20 18:27:41.898260 systemd[1]: Created slice kubepods-besteffort-pod6b7f6654_d1e6_40cd_9565_29e997b59a6c.slice - libcontainer container kubepods-besteffort-pod6b7f6654_d1e6_40cd_9565_29e997b59a6c.slice. 
Jun 20 18:27:41.910324 systemd[1]: Created slice kubepods-besteffort-pod7ca80078_dd2c_46f4_a88b_90d011ac3ef4.slice - libcontainer container kubepods-besteffort-pod7ca80078_dd2c_46f4_a88b_90d011ac3ef4.slice. Jun 20 18:27:41.929270 systemd[1]: Created slice kubepods-besteffort-pod8ff637cd_0f12_4574_89da_90b39dbb286e.slice - libcontainer container kubepods-besteffort-pod8ff637cd_0f12_4574_89da_90b39dbb286e.slice. Jun 20 18:27:41.941666 systemd[1]: Created slice kubepods-besteffort-pod5c5ccadd_4bf0_42a4_8e0b_f165e824edfe.slice - libcontainer container kubepods-besteffort-pod5c5ccadd_4bf0_42a4_8e0b_f165e824edfe.slice. Jun 20 18:27:41.948910 systemd[1]: Created slice kubepods-besteffort-pod96d87550_e618_4e0c_8509_559394b125f8.slice - libcontainer container kubepods-besteffort-pod96d87550_e618_4e0c_8509_559394b125f8.slice. Jun 20 18:27:41.954144 systemd[1]: Created slice kubepods-besteffort-pod8223c41e_22a0_464a_b9b5_2fc0bc637177.slice - libcontainer container kubepods-besteffort-pod8223c41e_22a0_464a_b9b5_2fc0bc637177.slice. 
Jun 20 18:27:42.008325 kubelet[3415]: I0620 18:27:42.008223 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-calico-apiserver-certs\") pod \"calico-apiserver-68657b97d-z7nsb\" (UID: \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\") " pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" Jun 20 18:27:42.008325 kubelet[3415]: I0620 18:27:42.008269 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b1d823a-0ec6-496a-9a96-cd9bacc490d2-config-volume\") pod \"coredns-674b8bbfcf-gdzhv\" (UID: \"9b1d823a-0ec6-496a-9a96-cd9bacc490d2\") " pod="kube-system/coredns-674b8bbfcf-gdzhv" Jun 20 18:27:42.008778 kubelet[3415]: I0620 18:27:42.008282 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkmfz\" (UniqueName: \"kubernetes.io/projected/ec2b0ad4-db12-493d-94fb-15a7feb27fa3-kube-api-access-rkmfz\") pod \"coredns-674b8bbfcf-992sr\" (UID: \"ec2b0ad4-db12-493d-94fb-15a7feb27fa3\") " pod="kube-system/coredns-674b8bbfcf-992sr" Jun 20 18:27:42.008856 kubelet[3415]: I0620 18:27:42.008818 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ff637cd-0f12-4574-89da-90b39dbb286e-calico-apiserver-certs\") pod \"calico-apiserver-68657b97d-rppnt\" (UID: \"8ff637cd-0f12-4574-89da-90b39dbb286e\") " pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" Jun 20 18:27:42.008896 kubelet[3415]: I0620 18:27:42.008857 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zfjh\" (UniqueName: \"kubernetes.io/projected/8ff637cd-0f12-4574-89da-90b39dbb286e-kube-api-access-6zfjh\") pod 
\"calico-apiserver-68657b97d-rppnt\" (UID: \"8ff637cd-0f12-4574-89da-90b39dbb286e\") " pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" Jun 20 18:27:42.008896 kubelet[3415]: I0620 18:27:42.008873 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec2b0ad4-db12-493d-94fb-15a7feb27fa3-config-volume\") pod \"coredns-674b8bbfcf-992sr\" (UID: \"ec2b0ad4-db12-493d-94fb-15a7feb27fa3\") " pod="kube-system/coredns-674b8bbfcf-992sr" Jun 20 18:27:42.008933 kubelet[3415]: I0620 18:27:42.008905 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b7f6654-d1e6-40cd-9565-29e997b59a6c-tigera-ca-bundle\") pod \"calico-kube-controllers-cd7745df-fdl5q\" (UID: \"6b7f6654-d1e6-40cd-9565-29e997b59a6c\") " pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" Jun 20 18:27:42.008933 kubelet[3415]: I0620 18:27:42.008918 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjtg\" (UniqueName: \"kubernetes.io/projected/6b7f6654-d1e6-40cd-9565-29e997b59a6c-kube-api-access-hrjtg\") pod \"calico-kube-controllers-cd7745df-fdl5q\" (UID: \"6b7f6654-d1e6-40cd-9565-29e997b59a6c\") " pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" Jun 20 18:27:42.008999 kubelet[3415]: I0620 18:27:42.008940 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ldnm\" (UniqueName: \"kubernetes.io/projected/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-kube-api-access-9ldnm\") pod \"calico-apiserver-68657b97d-z7nsb\" (UID: \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\") " pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" Jun 20 18:27:42.008999 kubelet[3415]: I0620 18:27:42.008952 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-qcf88\" (UniqueName: \"kubernetes.io/projected/9b1d823a-0ec6-496a-9a96-cd9bacc490d2-kube-api-access-qcf88\") pod \"coredns-674b8bbfcf-gdzhv\" (UID: \"9b1d823a-0ec6-496a-9a96-cd9bacc490d2\") " pod="kube-system/coredns-674b8bbfcf-gdzhv" Jun 20 18:27:42.051121 systemd[1]: Created slice kubepods-besteffort-pod0a9e252c_f052_4e18_abf4_a391b1d4aaf8.slice - libcontainer container kubepods-besteffort-pod0a9e252c_f052_4e18_abf4_a391b1d4aaf8.slice. Jun 20 18:27:42.053227 containerd[1884]: time="2025-06-20T18:27:42.053152461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nnz8,Uid:0a9e252c-f052-4e18-abf4-a391b1d4aaf8,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:42.096494 containerd[1884]: time="2025-06-20T18:27:42.096443031Z" level=error msg="Failed to destroy network for sandbox \"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.097924 systemd[1]: run-netns-cni\x2d76c5534a\x2d9688\x2d7297\x2db36e\x2da8982aacc4d5.mount: Deactivated successfully. 
Jun 20 18:27:42.105900 containerd[1884]: time="2025-06-20T18:27:42.105840762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nnz8,Uid:0a9e252c-f052-4e18-abf4-a391b1d4aaf8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.106235 kubelet[3415]: E0620 18:27:42.106187 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.106307 kubelet[3415]: E0620 18:27:42.106261 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5nnz8" Jun 20 18:27:42.106307 kubelet[3415]: E0620 18:27:42.106278 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5nnz8" 
Jun 20 18:27:42.106362 kubelet[3415]: E0620 18:27:42.106343 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5nnz8_calico-system(0a9e252c-f052-4e18-abf4-a391b1d4aaf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5nnz8_calico-system(0a9e252c-f052-4e18-abf4-a391b1d4aaf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"514dcc2e99d32f7da69c22ccb85b634d4b2ce2bf5aa8189463463e4947fefd3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5nnz8" podUID="0a9e252c-f052-4e18-abf4-a391b1d4aaf8" Jun 20 18:27:42.109525 kubelet[3415]: I0620 18:27:42.109439 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n425b\" (UniqueName: \"kubernetes.io/projected/96d87550-e618-4e0c-8509-559394b125f8-kube-api-access-n425b\") pod \"calico-apiserver-b4c847979-l4jsd\" (UID: \"96d87550-e618-4e0c-8509-559394b125f8\") " pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" Jun 20 18:27:42.109525 kubelet[3415]: I0620 18:27:42.109476 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-ca-bundle\") pod \"whisker-6b868cc4f-4cjbq\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " pod="calico-system/whisker-6b868cc4f-4cjbq" Jun 20 18:27:42.109728 kubelet[3415]: I0620 18:27:42.109712 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8223c41e-22a0-464a-b9b5-2fc0bc637177-config\") pod \"goldmane-5bd85449d4-l8dbl\" (UID: \"8223c41e-22a0-464a-b9b5-2fc0bc637177\") " 
pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.109805 kubelet[3415]: I0620 18:27:42.109795 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8223c41e-22a0-464a-b9b5-2fc0bc637177-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-l8dbl\" (UID: \"8223c41e-22a0-464a-b9b5-2fc0bc637177\") " pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.109870 kubelet[3415]: I0620 18:27:42.109854 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8223c41e-22a0-464a-b9b5-2fc0bc637177-goldmane-key-pair\") pod \"goldmane-5bd85449d4-l8dbl\" (UID: \"8223c41e-22a0-464a-b9b5-2fc0bc637177\") " pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.109930 kubelet[3415]: I0620 18:27:42.109920 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-backend-key-pair\") pod \"whisker-6b868cc4f-4cjbq\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " pod="calico-system/whisker-6b868cc4f-4cjbq" Jun 20 18:27:42.109989 kubelet[3415]: I0620 18:27:42.109979 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96d87550-e618-4e0c-8509-559394b125f8-calico-apiserver-certs\") pod \"calico-apiserver-b4c847979-l4jsd\" (UID: \"96d87550-e618-4e0c-8509-559394b125f8\") " pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" Jun 20 18:27:42.110056 kubelet[3415]: I0620 18:27:42.110045 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2n45\" (UniqueName: \"kubernetes.io/projected/8223c41e-22a0-464a-b9b5-2fc0bc637177-kube-api-access-t2n45\") 
pod \"goldmane-5bd85449d4-l8dbl\" (UID: \"8223c41e-22a0-464a-b9b5-2fc0bc637177\") " pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.110113 kubelet[3415]: I0620 18:27:42.110103 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqc8z\" (UniqueName: \"kubernetes.io/projected/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-kube-api-access-cqc8z\") pod \"whisker-6b868cc4f-4cjbq\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " pod="calico-system/whisker-6b868cc4f-4cjbq" Jun 20 18:27:42.142463 containerd[1884]: time="2025-06-20T18:27:42.141914138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 18:27:42.188191 containerd[1884]: time="2025-06-20T18:27:42.188147602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gdzhv,Uid:9b1d823a-0ec6-496a-9a96-cd9bacc490d2,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:42.195970 containerd[1884]: time="2025-06-20T18:27:42.195922126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-992sr,Uid:ec2b0ad4-db12-493d-94fb-15a7feb27fa3,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:42.203100 containerd[1884]: time="2025-06-20T18:27:42.203056246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd7745df-fdl5q,Uid:6b7f6654-d1e6-40cd-9565-29e997b59a6c,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:42.213712 containerd[1884]: time="2025-06-20T18:27:42.213645228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-z7nsb,Uid:7ca80078-dd2c-46f4-a88b-90d011ac3ef4,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:42.239096 containerd[1884]: time="2025-06-20T18:27:42.239047427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-rppnt,Uid:8ff637cd-0f12-4574-89da-90b39dbb286e,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:42.249709 containerd[1884]: 
time="2025-06-20T18:27:42.249558583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b868cc4f-4cjbq,Uid:5c5ccadd-4bf0-42a4-8e0b-f165e824edfe,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:42.251997 containerd[1884]: time="2025-06-20T18:27:42.251969277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-l4jsd,Uid:96d87550-e618-4e0c-8509-559394b125f8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:42.258359 containerd[1884]: time="2025-06-20T18:27:42.258193283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-l8dbl,Uid:8223c41e-22a0-464a-b9b5-2fc0bc637177,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:42.268808 containerd[1884]: time="2025-06-20T18:27:42.268768233Z" level=error msg="Failed to destroy network for sandbox \"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.298324 containerd[1884]: time="2025-06-20T18:27:42.298264896Z" level=error msg="Failed to destroy network for sandbox \"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.311319 containerd[1884]: time="2025-06-20T18:27:42.311185962Z" level=error msg="Failed to destroy network for sandbox \"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.314476 containerd[1884]: time="2025-06-20T18:27:42.314413232Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gdzhv,Uid:9b1d823a-0ec6-496a-9a96-cd9bacc490d2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.314806 kubelet[3415]: E0620 18:27:42.314770 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.314894 kubelet[3415]: E0620 18:27:42.314821 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gdzhv" Jun 20 18:27:42.314894 kubelet[3415]: E0620 18:27:42.314837 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gdzhv" Jun 20 18:27:42.314894 kubelet[3415]: E0620 18:27:42.314878 3415 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gdzhv_kube-system(9b1d823a-0ec6-496a-9a96-cd9bacc490d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gdzhv_kube-system(9b1d823a-0ec6-496a-9a96-cd9bacc490d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4d3f4bc1d294a86d3d5a2b6adcd9f4ba604e8f65f9b594a124f429a7df433af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gdzhv" podUID="9b1d823a-0ec6-496a-9a96-cd9bacc490d2" Jun 20 18:27:42.338997 containerd[1884]: time="2025-06-20T18:27:42.338953102Z" level=error msg="Failed to destroy network for sandbox \"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.351066 containerd[1884]: time="2025-06-20T18:27:42.350833458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-992sr,Uid:ec2b0ad4-db12-493d-94fb-15a7feb27fa3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.351380 kubelet[3415]: E0620 18:27:42.351330 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.351449 kubelet[3415]: E0620 18:27:42.351399 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-992sr" Jun 20 18:27:42.351449 kubelet[3415]: E0620 18:27:42.351419 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-992sr" Jun 20 18:27:42.351507 kubelet[3415]: E0620 18:27:42.351465 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-992sr_kube-system(ec2b0ad4-db12-493d-94fb-15a7feb27fa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-992sr_kube-system(ec2b0ad4-db12-493d-94fb-15a7feb27fa3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6ad5f06b4d788e6e2ce5b158603896ef2bda0c6481f870398abe619478a44a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-992sr" podUID="ec2b0ad4-db12-493d-94fb-15a7feb27fa3" Jun 20 18:27:42.365446 containerd[1884]: time="2025-06-20T18:27:42.365385027Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-cd7745df-fdl5q,Uid:6b7f6654-d1e6-40cd-9565-29e997b59a6c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.366054 kubelet[3415]: E0620 18:27:42.365917 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.366054 kubelet[3415]: E0620 18:27:42.365986 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" Jun 20 18:27:42.366054 kubelet[3415]: E0620 18:27:42.366004 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" Jun 20 18:27:42.366494 kubelet[3415]: E0620 18:27:42.366108 3415 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd7745df-fdl5q_calico-system(6b7f6654-d1e6-40cd-9565-29e997b59a6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd7745df-fdl5q_calico-system(6b7f6654-d1e6-40cd-9565-29e997b59a6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b08a42d2fe0b1ba41275e1ce1ce5441d80633576ac675463b9adb7ee884f9816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" podUID="6b7f6654-d1e6-40cd-9565-29e997b59a6c" Jun 20 18:27:42.371025 containerd[1884]: time="2025-06-20T18:27:42.370828434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-z7nsb,Uid:7ca80078-dd2c-46f4-a88b-90d011ac3ef4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.373312 kubelet[3415]: E0620 18:27:42.372576 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.373312 kubelet[3415]: E0620 18:27:42.372628 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" Jun 20 18:27:42.373312 kubelet[3415]: E0620 18:27:42.372646 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" Jun 20 18:27:42.373439 kubelet[3415]: E0620 18:27:42.372682 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68657b97d-z7nsb_calico-apiserver(7ca80078-dd2c-46f4-a88b-90d011ac3ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68657b97d-z7nsb_calico-apiserver(7ca80078-dd2c-46f4-a88b-90d011ac3ef4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bcbbcba121b6297d67ed11e0f9f4d9680186cb6dbc52a9298401dc9786e7425\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" podUID="7ca80078-dd2c-46f4-a88b-90d011ac3ef4" Jun 20 18:27:42.393490 containerd[1884]: time="2025-06-20T18:27:42.393343885Z" level=error msg="Failed to destroy network for sandbox \"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.398143 containerd[1884]: time="2025-06-20T18:27:42.398089360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-rppnt,Uid:8ff637cd-0f12-4574-89da-90b39dbb286e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.398460 kubelet[3415]: E0620 18:27:42.398426 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.398710 kubelet[3415]: E0620 18:27:42.398688 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" Jun 20 18:27:42.399784 kubelet[3415]: E0620 18:27:42.398784 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" Jun 20 18:27:42.399784 kubelet[3415]: E0620 18:27:42.398844 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68657b97d-rppnt_calico-apiserver(8ff637cd-0f12-4574-89da-90b39dbb286e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68657b97d-rppnt_calico-apiserver(8ff637cd-0f12-4574-89da-90b39dbb286e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f75f86ce915944114c3b96ca82e9134ce3327b9ab4774ee6e5030dc6bd928c2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" podUID="8ff637cd-0f12-4574-89da-90b39dbb286e" Jun 20 18:27:42.406543 containerd[1884]: time="2025-06-20T18:27:42.406492446Z" level=error msg="Failed to destroy network for sandbox \"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.412451 containerd[1884]: time="2025-06-20T18:27:42.412398339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b868cc4f-4cjbq,Uid:5c5ccadd-4bf0-42a4-8e0b-f165e824edfe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.412757 kubelet[3415]: E0620 18:27:42.412704 3415 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.412846 kubelet[3415]: E0620 18:27:42.412758 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b868cc4f-4cjbq" Jun 20 18:27:42.412846 kubelet[3415]: E0620 18:27:42.412774 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b868cc4f-4cjbq" Jun 20 18:27:42.412846 kubelet[3415]: E0620 18:27:42.412816 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b868cc4f-4cjbq_calico-system(5c5ccadd-4bf0-42a4-8e0b-f165e824edfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b868cc4f-4cjbq_calico-system(5c5ccadd-4bf0-42a4-8e0b-f165e824edfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c34df7d5e5b6d60e9edbe9ad37467ac57c4c6b820780f72c0d068a8aada3c9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b868cc4f-4cjbq" podUID="5c5ccadd-4bf0-42a4-8e0b-f165e824edfe" Jun 20 18:27:42.420682 containerd[1884]: time="2025-06-20T18:27:42.420571802Z" level=error msg="Failed to destroy network for sandbox \"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.426653 containerd[1884]: time="2025-06-20T18:27:42.426603738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-l4jsd,Uid:96d87550-e618-4e0c-8509-559394b125f8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.427187 kubelet[3415]: E0620 18:27:42.426830 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.427187 kubelet[3415]: E0620 18:27:42.426881 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" Jun 20 18:27:42.427187 kubelet[3415]: E0620 18:27:42.426897 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" Jun 20 18:27:42.427329 kubelet[3415]: E0620 18:27:42.426930 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b4c847979-l4jsd_calico-apiserver(96d87550-e618-4e0c-8509-559394b125f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b4c847979-l4jsd_calico-apiserver(96d87550-e618-4e0c-8509-559394b125f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6979086e18b36bbe8f9b83649c036bf0500fc791307690ef33faa9fa07549e5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" podUID="96d87550-e618-4e0c-8509-559394b125f8" Jun 20 18:27:42.428721 containerd[1884]: time="2025-06-20T18:27:42.428685455Z" level=error msg="Failed to destroy network for sandbox \"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.432505 containerd[1884]: time="2025-06-20T18:27:42.432433381Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5bd85449d4-l8dbl,Uid:8223c41e-22a0-464a-b9b5-2fc0bc637177,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.432917 kubelet[3415]: E0620 18:27:42.432778 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 18:27:42.432917 kubelet[3415]: E0620 18:27:42.432834 3415 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.432917 kubelet[3415]: E0620 18:27:42.432848 3415 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-l8dbl" Jun 20 18:27:42.433028 kubelet[3415]: E0620 18:27:42.432884 3415 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-l8dbl_calico-system(8223c41e-22a0-464a-b9b5-2fc0bc637177)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-l8dbl_calico-system(8223c41e-22a0-464a-b9b5-2fc0bc637177)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3c21142410f043177f02b277ee0d9a48a39431c47ee2f3b2bd16541c9772322\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-l8dbl" podUID="8223c41e-22a0-464a-b9b5-2fc0bc637177" Jun 20 18:27:46.009690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857865822.mount: Deactivated successfully. Jun 20 18:27:46.704300 containerd[1884]: time="2025-06-20T18:27:46.704238804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:46.709015 containerd[1884]: time="2025-06-20T18:27:46.708978141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=150542367" Jun 20 18:27:46.712637 containerd[1884]: time="2025-06-20T18:27:46.712576952Z" level=info msg="ImageCreate event name:\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:46.718458 containerd[1884]: time="2025-06-20T18:27:46.718405974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:46.718848 containerd[1884]: time="2025-06-20T18:27:46.718652606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id 
\"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"150542229\" in 4.576687955s" Jun 20 18:27:46.718848 containerd[1884]: time="2025-06-20T18:27:46.718679951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:d69e29506cd22411842a12828780c46b7599ce1233feed8a045732bfbdefdb66\"" Jun 20 18:27:46.744373 containerd[1884]: time="2025-06-20T18:27:46.744327264Z" level=info msg="CreateContainer within sandbox \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 18:27:46.776162 containerd[1884]: time="2025-06-20T18:27:46.776118490Z" level=info msg="Container 11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:46.800491 containerd[1884]: time="2025-06-20T18:27:46.800422373Z" level=info msg="CreateContainer within sandbox \"619f3d79aed5d924dfa20e24a063f65f8bdd33dd62f74b1414788ff092d3f7c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\"" Jun 20 18:27:46.801736 containerd[1884]: time="2025-06-20T18:27:46.801525698Z" level=info msg="StartContainer for \"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\"" Jun 20 18:27:46.802868 containerd[1884]: time="2025-06-20T18:27:46.802833407Z" level=info msg="connecting to shim 11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96" address="unix:///run/containerd/s/e39c73378cc747dafb6db8943cf442520a3299d182f3c4182b494f2ab08af434" protocol=ttrpc version=3 Jun 20 18:27:46.821433 systemd[1]: Started cri-containerd-11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96.scope - libcontainer container 
11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96. Jun 20 18:27:46.869097 containerd[1884]: time="2025-06-20T18:27:46.868953848Z" level=info msg="StartContainer for \"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" returns successfully" Jun 20 18:27:47.118853 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 18:27:47.118978 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 18:27:47.172312 kubelet[3415]: I0620 18:27:47.172096 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sps5r" podStartSLOduration=1.917486874 podStartE2EDuration="16.171991991s" podCreationTimestamp="2025-06-20 18:27:31 +0000 UTC" firstStartedPulling="2025-06-20 18:27:32.464892211 +0000 UTC m=+18.500348238" lastFinishedPulling="2025-06-20 18:27:46.719397336 +0000 UTC m=+32.754853355" observedRunningTime="2025-06-20 18:27:47.170047893 +0000 UTC m=+33.205503912" watchObservedRunningTime="2025-06-20 18:27:47.171991991 +0000 UTC m=+33.207448010" Jun 20 18:27:47.341156 kubelet[3415]: I0620 18:27:47.341100 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-ca-bundle\") pod \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " Jun 20 18:27:47.341156 kubelet[3415]: I0620 18:27:47.341152 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqc8z\" (UniqueName: \"kubernetes.io/projected/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-kube-api-access-cqc8z\") pod \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " Jun 20 18:27:47.341156 kubelet[3415]: I0620 18:27:47.341168 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-backend-key-pair\") pod \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\" (UID: \"5c5ccadd-4bf0-42a4-8e0b-f165e824edfe\") " Jun 20 18:27:47.341733 kubelet[3415]: I0620 18:27:47.341652 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe" (UID: "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:27:47.347394 systemd[1]: var-lib-kubelet-pods-5c5ccadd\x2d4bf0\x2d42a4\x2d8e0b\x2df165e824edfe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 18:27:47.349722 kubelet[3415]: I0620 18:27:47.349664 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe" (UID: "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:27:47.350192 systemd[1]: var-lib-kubelet-pods-5c5ccadd\x2d4bf0\x2d42a4\x2d8e0b\x2df165e824edfe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqc8z.mount: Deactivated successfully. Jun 20 18:27:47.351518 kubelet[3415]: I0620 18:27:47.350566 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-kube-api-access-cqc8z" (OuterVolumeSpecName: "kube-api-access-cqc8z") pod "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe" (UID: "5c5ccadd-4bf0-42a4-8e0b-f165e824edfe"). InnerVolumeSpecName "kube-api-access-cqc8z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:27:47.442575 kubelet[3415]: I0620 18:27:47.442429 3415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cqc8z\" (UniqueName: \"kubernetes.io/projected/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-kube-api-access-cqc8z\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:27:47.442575 kubelet[3415]: I0620 18:27:47.442466 3415 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-backend-key-pair\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:27:47.442575 kubelet[3415]: I0620 18:27:47.442474 3415 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe-whisker-ca-bundle\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:27:48.049170 systemd[1]: Removed slice kubepods-besteffort-pod5c5ccadd_4bf0_42a4_8e0b_f165e824edfe.slice - libcontainer container kubepods-besteffort-pod5c5ccadd_4bf0_42a4_8e0b_f165e824edfe.slice. Jun 20 18:27:48.241542 systemd[1]: Created slice kubepods-besteffort-poddcc2d5dc_42df_44d2_ab6f_40dc798723ba.slice - libcontainer container kubepods-besteffort-poddcc2d5dc_42df_44d2_ab6f_40dc798723ba.slice. 
Jun 20 18:27:48.245948 kubelet[3415]: I0620 18:27:48.245831 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dcc2d5dc-42df-44d2-ab6f-40dc798723ba-whisker-backend-key-pair\") pod \"whisker-795c746798-n4wpx\" (UID: \"dcc2d5dc-42df-44d2-ab6f-40dc798723ba\") " pod="calico-system/whisker-795c746798-n4wpx" Jun 20 18:27:48.246692 kubelet[3415]: I0620 18:27:48.246030 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcc2d5dc-42df-44d2-ab6f-40dc798723ba-whisker-ca-bundle\") pod \"whisker-795c746798-n4wpx\" (UID: \"dcc2d5dc-42df-44d2-ab6f-40dc798723ba\") " pod="calico-system/whisker-795c746798-n4wpx" Jun 20 18:27:48.246692 kubelet[3415]: I0620 18:27:48.246070 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6p79\" (UniqueName: \"kubernetes.io/projected/dcc2d5dc-42df-44d2-ab6f-40dc798723ba-kube-api-access-j6p79\") pod \"whisker-795c746798-n4wpx\" (UID: \"dcc2d5dc-42df-44d2-ab6f-40dc798723ba\") " pod="calico-system/whisker-795c746798-n4wpx" Jun 20 18:27:48.545084 containerd[1884]: time="2025-06-20T18:27:48.545041918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795c746798-n4wpx,Uid:dcc2d5dc-42df-44d2-ab6f-40dc798723ba,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:48.794282 systemd-networkd[1485]: calie67a071c1be: Link UP Jun 20 18:27:48.795270 systemd-networkd[1485]: calie67a071c1be: Gained carrier Jun 20 18:27:48.813066 containerd[1884]: 2025-06-20 18:27:48.600 [INFO][4596] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 18:27:48.813066 containerd[1884]: 2025-06-20 18:27:48.621 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0 whisker-795c746798- calico-system dcc2d5dc-42df-44d2-ab6f-40dc798723ba 929 0 2025-06-20 18:27:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:795c746798 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef whisker-795c746798-n4wpx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie67a071c1be [] [] }} ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-" Jun 20 18:27:48.813066 containerd[1884]: 2025-06-20 18:27:48.621 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813066 containerd[1884]: 2025-06-20 18:27:48.651 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" HandleID="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Workload="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.652 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" HandleID="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Workload="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4344.1.0-a-442b0d77ef", "pod":"whisker-795c746798-n4wpx", "timestamp":"2025-06-20 18:27:48.651943435 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.652 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.652 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.652 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.659 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.664 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.668 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.670 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813273 containerd[1884]: 2025-06-20 18:27:48.671 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.672 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 
handle="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.673 [INFO][4610] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.680 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.692 [INFO][4610] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.193/26] block=192.168.20.192/26 handle="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.692 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.193/26] handle="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.692 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:27:48.813540 containerd[1884]: 2025-06-20 18:27:48.692 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.193/26] IPv6=[] ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" HandleID="k8s-pod-network.29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Workload="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813655 containerd[1884]: 2025-06-20 18:27:48.697 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0", GenerateName:"whisker-795c746798-", Namespace:"calico-system", SelfLink:"", UID:"dcc2d5dc-42df-44d2-ab6f-40dc798723ba", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795c746798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"whisker-795c746798-n4wpx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calie67a071c1be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:48.813655 containerd[1884]: 2025-06-20 18:27:48.698 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.193/32] ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813710 containerd[1884]: 2025-06-20 18:27:48.698 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie67a071c1be ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813710 containerd[1884]: 2025-06-20 18:27:48.794 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.813749 containerd[1884]: 2025-06-20 18:27:48.795 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0", GenerateName:"whisker-795c746798-", Namespace:"calico-system", SelfLink:"", 
UID:"dcc2d5dc-42df-44d2-ab6f-40dc798723ba", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795c746798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa", Pod:"whisker-795c746798-n4wpx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie67a071c1be", MAC:"ea:e8:ce:6b:37:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:48.813789 containerd[1884]: 2025-06-20 18:27:48.810 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" Namespace="calico-system" Pod="whisker-795c746798-n4wpx" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-whisker--795c746798--n4wpx-eth0" Jun 20 18:27:48.876728 containerd[1884]: time="2025-06-20T18:27:48.876650865Z" level=info msg="connecting to shim 29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa" address="unix:///run/containerd/s/79b96b168e77756eae6b06062eae02b6e2a88cff8389e7b1850810937f11a6f3" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:48.906422 systemd[1]: Started 
cri-containerd-29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa.scope - libcontainer container 29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa. Jun 20 18:27:48.941644 containerd[1884]: time="2025-06-20T18:27:48.941605635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795c746798-n4wpx,Uid:dcc2d5dc-42df-44d2-ab6f-40dc798723ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa\"" Jun 20 18:27:48.944021 containerd[1884]: time="2025-06-20T18:27:48.943967227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 18:27:49.198945 systemd-networkd[1485]: vxlan.calico: Link UP Jun 20 18:27:49.198950 systemd-networkd[1485]: vxlan.calico: Gained carrier Jun 20 18:27:50.045692 kubelet[3415]: I0620 18:27:50.045648 3415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5ccadd-4bf0-42a4-8e0b-f165e824edfe" path="/var/lib/kubelet/pods/5c5ccadd-4bf0-42a4-8e0b-f165e824edfe/volumes" Jun 20 18:27:50.101550 systemd-networkd[1485]: calie67a071c1be: Gained IPv6LL Jun 20 18:27:50.268616 containerd[1884]: time="2025-06-20T18:27:50.268559679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:50.272158 containerd[1884]: time="2025-06-20T18:27:50.271974855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4605623" Jun 20 18:27:50.277337 containerd[1884]: time="2025-06-20T18:27:50.277308904Z" level=info msg="ImageCreate event name:\"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:50.284918 containerd[1884]: time="2025-06-20T18:27:50.284396726Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:50.284918 containerd[1884]: time="2025-06-20T18:27:50.284805729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"5974856\" in 1.340811973s" Jun 20 18:27:50.284918 containerd[1884]: time="2025-06-20T18:27:50.284831633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:b76f43d4d1ac8d1d2f5e1adfe3cf6f3a9771ee05a9e8833d409d7938a9304a21\"" Jun 20 18:27:50.294271 containerd[1884]: time="2025-06-20T18:27:50.294235963Z" level=info msg="CreateContainer within sandbox \"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 18:27:50.339094 containerd[1884]: time="2025-06-20T18:27:50.338509629Z" level=info msg="Container 6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:50.367572 containerd[1884]: time="2025-06-20T18:27:50.367518015Z" level=info msg="CreateContainer within sandbox \"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6\"" Jun 20 18:27:50.370009 containerd[1884]: time="2025-06-20T18:27:50.369985502Z" level=info msg="StartContainer for \"6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6\"" Jun 20 18:27:50.371536 containerd[1884]: time="2025-06-20T18:27:50.371509501Z" level=info msg="connecting to shim 
6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6" address="unix:///run/containerd/s/79b96b168e77756eae6b06062eae02b6e2a88cff8389e7b1850810937f11a6f3" protocol=ttrpc version=3 Jun 20 18:27:50.391444 systemd[1]: Started cri-containerd-6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6.scope - libcontainer container 6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6. Jun 20 18:27:50.427023 containerd[1884]: time="2025-06-20T18:27:50.426985287Z" level=info msg="StartContainer for \"6d3bdf0d8e4224212cf0c94120c84478d6e2892e2b3f9ab8cad499e00114efc6\" returns successfully" Jun 20 18:27:50.428662 containerd[1884]: time="2025-06-20T18:27:50.428634673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 18:27:51.189491 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL Jun 20 18:27:51.774832 kubelet[3415]: I0620 18:27:51.774611 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:51.829736 containerd[1884]: time="2025-06-20T18:27:51.829694533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"1b05b10e8099eaea94e48ff2151b56205c10063174013d4d38ddc6ac81bd309e\" pid:4827 exited_at:{seconds:1750444071 nanos:829094622}" Jun 20 18:27:51.902438 containerd[1884]: time="2025-06-20T18:27:51.902400298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"795789e832c0df1e67400e38c43d4ef174ffe9d2a2b765cafd615c93783d22b5\" pid:4850 exited_at:{seconds:1750444071 nanos:902114259}" Jun 20 18:27:52.790923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40159412.mount: Deactivated successfully. 
Jun 20 18:27:52.895196 containerd[1884]: time="2025-06-20T18:27:52.894668866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:52.899251 containerd[1884]: time="2025-06-20T18:27:52.899202806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=30829716" Jun 20 18:27:52.907332 containerd[1884]: time="2025-06-20T18:27:52.907304103Z" level=info msg="ImageCreate event name:\"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:52.914101 containerd[1884]: time="2025-06-20T18:27:52.914057636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:52.915129 containerd[1884]: time="2025-06-20T18:27:52.914908874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"30829546\" in 2.486249928s" Jun 20 18:27:52.915129 containerd[1884]: time="2025-06-20T18:27:52.914935539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:2d14165c450f979723a8cf9c4d4436d83734f2c51a2616cc780b4860cc5a04d5\"" Jun 20 18:27:52.924058 containerd[1884]: time="2025-06-20T18:27:52.924035541Z" level=info msg="CreateContainer within sandbox \"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 18:27:52.959583 
containerd[1884]: time="2025-06-20T18:27:52.958922986Z" level=info msg="Container 70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:52.984200 containerd[1884]: time="2025-06-20T18:27:52.984160415Z" level=info msg="CreateContainer within sandbox \"29a37c59af439f2b99c9e98a6311a7993917d7ac891c783faf83ed85aa4ccdfa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f\"" Jun 20 18:27:52.985392 containerd[1884]: time="2025-06-20T18:27:52.985355063Z" level=info msg="StartContainer for \"70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f\"" Jun 20 18:27:52.986807 containerd[1884]: time="2025-06-20T18:27:52.986778094Z" level=info msg="connecting to shim 70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f" address="unix:///run/containerd/s/79b96b168e77756eae6b06062eae02b6e2a88cff8389e7b1850810937f11a6f3" protocol=ttrpc version=3 Jun 20 18:27:53.008417 systemd[1]: Started cri-containerd-70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f.scope - libcontainer container 70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f. 
Jun 20 18:27:53.054093 containerd[1884]: time="2025-06-20T18:27:53.053273379Z" level=info msg="StartContainer for \"70f68a813143c9c13234d567b08d279f2df0f1ea14725d481b3a15329681c64f\" returns successfully" Jun 20 18:27:53.189917 kubelet[3415]: I0620 18:27:53.189851 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-795c746798-n4wpx" podStartSLOduration=1.216944024 podStartE2EDuration="5.189837442s" podCreationTimestamp="2025-06-20 18:27:48 +0000 UTC" firstStartedPulling="2025-06-20 18:27:48.942901727 +0000 UTC m=+34.978357746" lastFinishedPulling="2025-06-20 18:27:52.915795145 +0000 UTC m=+38.951251164" observedRunningTime="2025-06-20 18:27:53.189736423 +0000 UTC m=+39.225192442" watchObservedRunningTime="2025-06-20 18:27:53.189837442 +0000 UTC m=+39.225293461" Jun 20 18:27:54.046337 containerd[1884]: time="2025-06-20T18:27:54.046203203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-rppnt,Uid:8ff637cd-0f12-4574-89da-90b39dbb286e,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:54.217805 systemd-networkd[1485]: cali0afab43fa38: Link UP Jun 20 18:27:54.218525 systemd-networkd[1485]: cali0afab43fa38: Gained carrier Jun 20 18:27:54.236470 containerd[1884]: 2025-06-20 18:27:54.111 [INFO][4906] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0 calico-apiserver-68657b97d- calico-apiserver 8ff637cd-0f12-4574-89da-90b39dbb286e 864 0 2025-06-20 18:27:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68657b97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef calico-apiserver-68657b97d-rppnt eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali0afab43fa38 [] [] }} ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-" Jun 20 18:27:54.236470 containerd[1884]: 2025-06-20 18:27:54.111 [INFO][4906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236470 containerd[1884]: 2025-06-20 18:27:54.152 [INFO][4915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.153 [INFO][4915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"calico-apiserver-68657b97d-rppnt", "timestamp":"2025-06-20 18:27:54.152524672 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:54.236616 containerd[1884]: 
2025-06-20 18:27:54.153 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.154 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.154 [INFO][4915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.163 [INFO][4915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.173 [INFO][4915] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.184 [INFO][4915] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.186 [INFO][4915] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236616 containerd[1884]: 2025-06-20 18:27:54.188 [INFO][4915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.188 [INFO][4915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.190 [INFO][4915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.195 [INFO][4915] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.20.192/26 handle="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.210 [INFO][4915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.194/26] block=192.168.20.192/26 handle="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.210 [INFO][4915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.194/26] handle="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.210 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:54.236756 containerd[1884]: 2025-06-20 18:27:54.210 [INFO][4915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.194/26] IPv6=[] ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236857 containerd[1884]: 2025-06-20 18:27:54.213 [INFO][4906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0", GenerateName:"calico-apiserver-68657b97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ff637cd-0f12-4574-89da-90b39dbb286e", ResourceVersion:"864", 
Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68657b97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"calico-apiserver-68657b97d-rppnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0afab43fa38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:54.236892 containerd[1884]: 2025-06-20 18:27:54.213 [INFO][4906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.194/32] ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236892 containerd[1884]: 2025-06-20 18:27:54.213 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0afab43fa38 ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236892 containerd[1884]: 2025-06-20 18:27:54.219 
[INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.236932 containerd[1884]: 2025-06-20 18:27:54.219 [INFO][4906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0", GenerateName:"calico-apiserver-68657b97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ff637cd-0f12-4574-89da-90b39dbb286e", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68657b97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff", Pod:"calico-apiserver-68657b97d-rppnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0afab43fa38", MAC:"1e:4b:1b:34:ae:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:54.236966 containerd[1884]: 2025-06-20 18:27:54.232 [INFO][4906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-rppnt" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:27:54.309348 containerd[1884]: time="2025-06-20T18:27:54.308063125Z" level=info msg="connecting to shim a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" address="unix:///run/containerd/s/ba4841c420a7b1ee508fd0c124f23cc1cdef16227f14115f2203b05ed84b980f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:54.334636 systemd[1]: Started cri-containerd-a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff.scope - libcontainer container a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff. 
Jun 20 18:27:54.382453 containerd[1884]: time="2025-06-20T18:27:54.382413654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-rppnt,Uid:8ff637cd-0f12-4574-89da-90b39dbb286e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\"" Jun 20 18:27:54.384041 containerd[1884]: time="2025-06-20T18:27:54.383872622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:27:55.044247 containerd[1884]: time="2025-06-20T18:27:55.044091287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-l4jsd,Uid:96d87550-e618-4e0c-8509-559394b125f8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:55.044247 containerd[1884]: time="2025-06-20T18:27:55.044142529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd7745df-fdl5q,Uid:6b7f6654-d1e6-40cd-9565-29e997b59a6c,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:55.044597 containerd[1884]: time="2025-06-20T18:27:55.044566191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-992sr,Uid:ec2b0ad4-db12-493d-94fb-15a7feb27fa3,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:55.226072 systemd-networkd[1485]: calibfc5a5b8928: Link UP Jun 20 18:27:55.226749 systemd-networkd[1485]: calibfc5a5b8928: Gained carrier Jun 20 18:27:55.261979 containerd[1884]: 2025-06-20 18:27:55.109 [INFO][4986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0 calico-apiserver-b4c847979- calico-apiserver 96d87550-e618-4e0c-8509-559394b125f8 866 0 2025-06-20 18:27:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b4c847979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef calico-apiserver-b4c847979-l4jsd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibfc5a5b8928 [] [] }} ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-" Jun 20 18:27:55.261979 containerd[1884]: 2025-06-20 18:27:55.109 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.261979 containerd[1884]: 2025-06-20 18:27:55.145 [INFO][5021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" HandleID="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.146 [INFO][5021] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" HandleID="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"calico-apiserver-b4c847979-l4jsd", "timestamp":"2025-06-20 18:27:55.145933784 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.146 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.146 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.146 [INFO][5021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.158 [INFO][5021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.173 [INFO][5021] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.182 [INFO][5021] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.184 [INFO][5021] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262413 containerd[1884]: 2025-06-20 18:27:55.185 [INFO][5021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.185 [INFO][5021] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.187 [INFO][5021] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692 Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.193 [INFO][5021] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5021] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.195/26] block=192.168.20.192/26 handle="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.195/26] handle="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 18:27:55.262555 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.195/26] IPv6=[] ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" HandleID="k8s-pod-network.b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.262944 containerd[1884]: 2025-06-20 18:27:55.222 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0", GenerateName:"calico-apiserver-b4c847979-", Namespace:"calico-apiserver", SelfLink:"", UID:"96d87550-e618-4e0c-8509-559394b125f8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4c847979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"calico-apiserver-b4c847979-l4jsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibfc5a5b8928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.263023 containerd[1884]: 2025-06-20 18:27:55.222 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.195/32] ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.263023 containerd[1884]: 2025-06-20 18:27:55.222 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfc5a5b8928 ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.263023 containerd[1884]: 2025-06-20 18:27:55.226 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.263070 containerd[1884]: 2025-06-20 18:27:55.227 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0", GenerateName:"calico-apiserver-b4c847979-", Namespace:"calico-apiserver", SelfLink:"", UID:"96d87550-e618-4e0c-8509-559394b125f8", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4c847979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692", Pod:"calico-apiserver-b4c847979-l4jsd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibfc5a5b8928", MAC:"42:ca:90:33:7b:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.263106 containerd[1884]: 2025-06-20 18:27:55.260 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-l4jsd" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--l4jsd-eth0" Jun 20 18:27:55.332843 systemd-networkd[1485]: calia31903399ae: Link UP Jun 20 18:27:55.333000 
systemd-networkd[1485]: calia31903399ae: Gained carrier Jun 20 18:27:55.360582 containerd[1884]: time="2025-06-20T18:27:55.360462137Z" level=info msg="connecting to shim b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692" address="unix:///run/containerd/s/c53761db0eb33e9fb7db693d76011edd3c044cce993e3eedf7c0b73ebb9d422d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:55.371803 containerd[1884]: 2025-06-20 18:27:55.117 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0 calico-kube-controllers-cd7745df- calico-system 6b7f6654-d1e6-40cd-9565-29e997b59a6c 862 0 2025-06-20 18:27:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd7745df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef calico-kube-controllers-cd7745df-fdl5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia31903399ae [] [] }} ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-" Jun 20 18:27:55.371803 containerd[1884]: 2025-06-20 18:27:55.117 [INFO][4996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.371803 containerd[1884]: 2025-06-20 18:27:55.155 [INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" HandleID="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.167 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" HandleID="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"calico-kube-controllers-cd7745df-fdl5q", "timestamp":"2025-06-20 18:27:55.155971557 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.168 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.217 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.258 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.273 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.279 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.282 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372082 containerd[1884]: 2025-06-20 18:27:55.285 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.285 [INFO][5027] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.289 [INFO][5027] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198 Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.298 [INFO][5027] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.322 [INFO][5027] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26 handle="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.322 [INFO][5027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.196/26] handle="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.324 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:55.372415 containerd[1884]: 2025-06-20 18:27:55.324 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.196/26] IPv6=[] ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" HandleID="k8s-pod-network.1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.374116 containerd[1884]: 2025-06-20 18:27:55.327 [INFO][4996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0", GenerateName:"calico-kube-controllers-cd7745df-", Namespace:"calico-system", SelfLink:"", UID:"6b7f6654-d1e6-40cd-9565-29e997b59a6c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd7745df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"calico-kube-controllers-cd7745df-fdl5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31903399ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.374171 containerd[1884]: 2025-06-20 18:27:55.327 [INFO][4996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.196/32] ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.374171 containerd[1884]: 2025-06-20 18:27:55.327 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia31903399ae ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.374171 containerd[1884]: 2025-06-20 18:27:55.332 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.374217 containerd[1884]: 2025-06-20 18:27:55.332 [INFO][4996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0", GenerateName:"calico-kube-controllers-cd7745df-", Namespace:"calico-system", SelfLink:"", UID:"6b7f6654-d1e6-40cd-9565-29e997b59a6c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd7745df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198", Pod:"calico-kube-controllers-cd7745df-fdl5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia31903399ae", MAC:"06:3c:b4:49:a7:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.374249 containerd[1884]: 2025-06-20 18:27:55.365 [INFO][4996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" Namespace="calico-system" Pod="calico-kube-controllers-cd7745df-fdl5q" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--kube--controllers--cd7745df--fdl5q-eth0" Jun 20 18:27:55.412466 systemd[1]: Started cri-containerd-b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692.scope - libcontainer container b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692. Jun 20 18:27:55.455612 containerd[1884]: time="2025-06-20T18:27:55.455569370Z" level=info msg="connecting to shim 1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198" address="unix:///run/containerd/s/60b200b120d237d2389150262ea4ad59de9e56b8361f287b8b0a9076dabbf6e9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:55.478495 systemd[1]: Started cri-containerd-1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198.scope - libcontainer container 1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198. 
Jun 20 18:27:55.486956 systemd-networkd[1485]: cali537355d27e3: Link UP Jun 20 18:27:55.487652 systemd-networkd[1485]: cali537355d27e3: Gained carrier Jun 20 18:27:55.515142 containerd[1884]: 2025-06-20 18:27:55.136 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0 coredns-674b8bbfcf- kube-system ec2b0ad4-db12-493d-94fb-15a7feb27fa3 861 0 2025-06-20 18:27:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef coredns-674b8bbfcf-992sr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali537355d27e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-" Jun 20 18:27:55.515142 containerd[1884]: 2025-06-20 18:27:55.139 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.515142 containerd[1884]: 2025-06-20 18:27:55.173 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" HandleID="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.174 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" HandleID="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"coredns-674b8bbfcf-992sr", "timestamp":"2025-06-20 18:27:55.173910512 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.174 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.323 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.323 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.375 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.389 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.399 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.403 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516398 containerd[1884]: 2025-06-20 18:27:55.406 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.406 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.442 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7 Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.458 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.470 [INFO][5035] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.20.197/26] block=192.168.20.192/26 handle="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.470 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.197/26] handle="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.470 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:55.516567 containerd[1884]: 2025-06-20 18:27:55.470 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.197/26] IPv6=[] ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" HandleID="k8s-pod-network.0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.477 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ec2b0ad4-db12-493d-94fb-15a7feb27fa3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"coredns-674b8bbfcf-992sr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali537355d27e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.479 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.197/32] ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.479 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali537355d27e3 ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.487 [INFO][5009] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.488 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ec2b0ad4-db12-493d-94fb-15a7feb27fa3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7", Pod:"coredns-674b8bbfcf-992sr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali537355d27e3", MAC:"da:90:c7:96:24:a9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:55.516664 containerd[1884]: 2025-06-20 18:27:55.510 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" Namespace="kube-system" Pod="coredns-674b8bbfcf-992sr" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--992sr-eth0" Jun 20 18:27:55.560890 containerd[1884]: time="2025-06-20T18:27:55.560224312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd7745df-fdl5q,Uid:6b7f6654-d1e6-40cd-9565-29e997b59a6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198\"" Jun 20 18:27:55.580639 containerd[1884]: time="2025-06-20T18:27:55.580592691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-l4jsd,Uid:96d87550-e618-4e0c-8509-559394b125f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692\"" Jun 20 18:27:55.601369 containerd[1884]: time="2025-06-20T18:27:55.600256567Z" level=info msg="connecting to shim 0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7" address="unix:///run/containerd/s/4b9ad9f07d5833fa90e6eec0fc71b9f55e009621f835e7c41d518b45fe582a03" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:55.619495 systemd[1]: Started 
cri-containerd-0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7.scope - libcontainer container 0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7. Jun 20 18:27:55.670584 containerd[1884]: time="2025-06-20T18:27:55.670483416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-992sr,Uid:ec2b0ad4-db12-493d-94fb-15a7feb27fa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7\"" Jun 20 18:27:55.680937 containerd[1884]: time="2025-06-20T18:27:55.680901737Z" level=info msg="CreateContainer within sandbox \"0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:27:55.729463 containerd[1884]: time="2025-06-20T18:27:55.729350319Z" level=info msg="Container 216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:55.751844 containerd[1884]: time="2025-06-20T18:27:55.751800560Z" level=info msg="CreateContainer within sandbox \"0add49b330acb6c06368d4dd56158177b5d0c99e68f113ec4131b710da8d73f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832\"" Jun 20 18:27:55.753539 containerd[1884]: time="2025-06-20T18:27:55.753500472Z" level=info msg="StartContainer for \"216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832\"" Jun 20 18:27:55.755517 containerd[1884]: time="2025-06-20T18:27:55.755480890Z" level=info msg="connecting to shim 216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832" address="unix:///run/containerd/s/4b9ad9f07d5833fa90e6eec0fc71b9f55e009621f835e7c41d518b45fe582a03" protocol=ttrpc version=3 Jun 20 18:27:55.790533 systemd[1]: Started cri-containerd-216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832.scope - libcontainer container 
216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832. Jun 20 18:27:55.797501 systemd-networkd[1485]: cali0afab43fa38: Gained IPv6LL Jun 20 18:27:56.045164 containerd[1884]: time="2025-06-20T18:27:56.044523665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-l8dbl,Uid:8223c41e-22a0-464a-b9b5-2fc0bc637177,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:56.077495 containerd[1884]: time="2025-06-20T18:27:56.077458821Z" level=info msg="StartContainer for \"216eda850f662476ce329b586c815456630bf180ff8bd274f3dd5aa2beec6832\" returns successfully" Jun 20 18:27:56.248778 kubelet[3415]: I0620 18:27:56.248710 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-992sr" podStartSLOduration=37.248693794 podStartE2EDuration="37.248693794s" podCreationTimestamp="2025-06-20 18:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:56.22221686 +0000 UTC m=+42.257672879" watchObservedRunningTime="2025-06-20 18:27:56.248693794 +0000 UTC m=+42.284149813" Jun 20 18:27:56.391038 systemd-networkd[1485]: cali85739f34867: Link UP Jun 20 18:27:56.391361 systemd-networkd[1485]: cali85739f34867: Gained carrier Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.144 [INFO][5244] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0 goldmane-5bd85449d4- calico-system 8223c41e-22a0-464a-b9b5-2fc0bc637177 867 0 2025-06-20 18:27:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef goldmane-5bd85449d4-l8dbl eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali85739f34867 [] [] }} ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.144 [INFO][5244] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.219 [INFO][5255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" HandleID="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Workload="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.219 [INFO][5255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" HandleID="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Workload="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddb20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"goldmane-5bd85449d4-l8dbl", "timestamp":"2025-06-20 18:27:56.217171221 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.220 [INFO][5255] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.220 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.220 [INFO][5255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.273 [INFO][5255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.294 [INFO][5255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.303 [INFO][5255] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.326 [INFO][5255] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.328 [INFO][5255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.328 [INFO][5255] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.330 [INFO][5255] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.340 [INFO][5255] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 
handle="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.357 [INFO][5255] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.198/26] block=192.168.20.192/26 handle="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.357 [INFO][5255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.198/26] handle="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.357 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:56.419599 containerd[1884]: 2025-06-20 18:27:56.357 [INFO][5255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.198/26] IPv6=[] ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" HandleID="k8s-pod-network.e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Workload="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.365 [INFO][5244] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"8223c41e-22a0-464a-b9b5-2fc0bc637177", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 
31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"goldmane-5bd85449d4-l8dbl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85739f34867", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.365 [INFO][5244] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.198/32] ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.365 [INFO][5244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85739f34867 ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.390 [INFO][5244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" 
Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.393 [INFO][5244] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"8223c41e-22a0-464a-b9b5-2fc0bc637177", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c", Pod:"goldmane-5bd85449d4-l8dbl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali85739f34867", MAC:"1e:09:61:46:2b:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:56.420241 containerd[1884]: 2025-06-20 18:27:56.413 [INFO][5244] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" Namespace="calico-system" Pod="goldmane-5bd85449d4-l8dbl" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-goldmane--5bd85449d4--l8dbl-eth0" Jun 20 18:27:56.501644 systemd-networkd[1485]: calibfc5a5b8928: Gained IPv6LL Jun 20 18:27:56.630508 systemd-networkd[1485]: cali537355d27e3: Gained IPv6LL Jun 20 18:27:56.757538 systemd-networkd[1485]: calia31903399ae: Gained IPv6LL Jun 20 18:27:56.784532 containerd[1884]: time="2025-06-20T18:27:56.783799215Z" level=info msg="connecting to shim e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c" address="unix:///run/containerd/s/6bcbf222e9f83039373ee68ee939b7aaed7d20f941e9a7cf739e4d6b122fed3e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:56.808468 systemd[1]: Started cri-containerd-e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c.scope - libcontainer container e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c. 
Jun 20 18:27:57.044422 containerd[1884]: time="2025-06-20T18:27:57.044268691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-z7nsb,Uid:7ca80078-dd2c-46f4-a88b-90d011ac3ef4,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:27:57.044589 containerd[1884]: time="2025-06-20T18:27:57.044483875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nnz8,Uid:0a9e252c-f052-4e18-abf4-a391b1d4aaf8,Namespace:calico-system,Attempt:0,}" Jun 20 18:27:57.415503 containerd[1884]: time="2025-06-20T18:27:57.415463175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-l8dbl,Uid:8223c41e-22a0-464a-b9b5-2fc0bc637177,Namespace:calico-system,Attempt:0,} returns sandbox id \"e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c\"" Jun 20 18:27:57.484632 containerd[1884]: time="2025-06-20T18:27:57.483424576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:57.487371 containerd[1884]: time="2025-06-20T18:27:57.487341601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=44514850" Jun 20 18:27:57.492537 containerd[1884]: time="2025-06-20T18:27:57.492512099Z" level=info msg="ImageCreate event name:\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:57.501339 containerd[1884]: time="2025-06-20T18:27:57.501306620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:57.501952 containerd[1884]: time="2025-06-20T18:27:57.501836069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id 
\"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 3.11793323s" Jun 20 18:27:57.501952 containerd[1884]: time="2025-06-20T18:27:57.501865526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:27:57.503339 containerd[1884]: time="2025-06-20T18:27:57.503127144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 18:27:57.511042 containerd[1884]: time="2025-06-20T18:27:57.511008603Z" level=info msg="CreateContainer within sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:27:57.546994 containerd[1884]: time="2025-06-20T18:27:57.546941808Z" level=info msg="Container fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:57.550117 systemd-networkd[1485]: califb612e112ce: Link UP Jun 20 18:27:57.550636 systemd-networkd[1485]: califb612e112ce: Gained carrier Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.452 [INFO][5320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0 calico-apiserver-68657b97d- calico-apiserver 7ca80078-dd2c-46f4-a88b-90d011ac3ef4 863 0 2025-06-20 18:27:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68657b97d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef 
calico-apiserver-68657b97d-z7nsb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb612e112ce [] [] }} ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.453 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.481 [INFO][5351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.481 [INFO][5351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"calico-apiserver-68657b97d-z7nsb", "timestamp":"2025-06-20 18:27:57.48146124 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.481 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.481 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.481 [INFO][5351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.490 [INFO][5351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.498 [INFO][5351] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.516 [INFO][5351] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.519 [INFO][5351] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.520 [INFO][5351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.520 [INFO][5351] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.521 [INFO][5351] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3 Jun 20 18:27:57.569263 
containerd[1884]: 2025-06-20 18:27:57.527 [INFO][5351] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.538 [INFO][5351] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.20.199/26] block=192.168.20.192/26 handle="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.542 [INFO][5351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.199/26] handle="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.542 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:57.569263 containerd[1884]: 2025-06-20 18:27:57.542 [INFO][5351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.199/26] IPv6=[] ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.544 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0", GenerateName:"calico-apiserver-68657b97d-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"7ca80078-dd2c-46f4-a88b-90d011ac3ef4", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68657b97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"calico-apiserver-68657b97d-z7nsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb612e112ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.545 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.199/32] ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.545 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb612e112ce ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" 
WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.550 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.553 [INFO][5320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0", GenerateName:"calico-apiserver-68657b97d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7ca80078-dd2c-46f4-a88b-90d011ac3ef4", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68657b97d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", 
ContainerID:"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3", Pod:"calico-apiserver-68657b97d-z7nsb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb612e112ce", MAC:"62:63:40:20:f7:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:57.569662 containerd[1884]: 2025-06-20 18:27:57.566 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Namespace="calico-apiserver" Pod="calico-apiserver-68657b97d-z7nsb" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:27:57.571992 containerd[1884]: time="2025-06-20T18:27:57.571856755Z" level=info msg="CreateContainer within sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\"" Jun 20 18:27:57.575489 containerd[1884]: time="2025-06-20T18:27:57.574130765Z" level=info msg="StartContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\"" Jun 20 18:27:57.575489 containerd[1884]: time="2025-06-20T18:27:57.574874558Z" level=info msg="connecting to shim fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c" address="unix:///run/containerd/s/ba4841c420a7b1ee508fd0c124f23cc1cdef16227f14115f2203b05ed84b980f" protocol=ttrpc version=3 Jun 20 18:27:57.593818 systemd[1]: Started cri-containerd-fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c.scope - libcontainer container fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c. 
Jun 20 18:27:57.625437 containerd[1884]: time="2025-06-20T18:27:57.625399658Z" level=info msg="connecting to shim 4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" address="unix:///run/containerd/s/705f3a158f8beab589b83443d32471b93b65d9de0fdfbb772bbe80e410c20020" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:57.650831 systemd[1]: Started cri-containerd-4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3.scope - libcontainer container 4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3. Jun 20 18:27:57.667883 systemd-networkd[1485]: calid64d2962521: Link UP Jun 20 18:27:57.671047 systemd-networkd[1485]: calid64d2962521: Gained carrier Jun 20 18:27:57.695824 containerd[1884]: time="2025-06-20T18:27:57.695763843Z" level=info msg="StartContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" returns successfully" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.473 [INFO][5330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0 csi-node-driver- calico-system 0a9e252c-f052-4e18-abf4-a391b1d4aaf8 748 0 2025-06-20 18:27:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef csi-node-driver-5nnz8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid64d2962521 [] [] }} ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.474 [INFO][5330] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.497 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" HandleID="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Workload="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.497 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" HandleID="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Workload="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002550a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"csi-node-driver-5nnz8", "timestamp":"2025-06-20 18:27:57.497563985 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.497 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.542 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.542 [INFO][5358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.592 [INFO][5358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.598 [INFO][5358] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.605 [INFO][5358] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.608 [INFO][5358] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.612 [INFO][5358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.613 [INFO][5358] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.615 [INFO][5358] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9 Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.629 [INFO][5358] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.652 [INFO][5358] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.20.200/26] block=192.168.20.192/26 handle="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.652 [INFO][5358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.200/26] handle="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.652 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:57.698518 containerd[1884]: 2025-06-20 18:27:57.652 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.200/26] IPv6=[] ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" HandleID="k8s-pod-network.409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Workload="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.699626 containerd[1884]: 2025-06-20 18:27:57.659 [INFO][5330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a9e252c-f052-4e18-abf4-a391b1d4aaf8", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"csi-node-driver-5nnz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid64d2962521", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:57.699626 containerd[1884]: 2025-06-20 18:27:57.661 [INFO][5330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.200/32] ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.699626 containerd[1884]: 2025-06-20 18:27:57.661 [INFO][5330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid64d2962521 ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.699626 containerd[1884]: 2025-06-20 18:27:57.671 [INFO][5330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.699626 
containerd[1884]: 2025-06-20 18:27:57.672 [INFO][5330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a9e252c-f052-4e18-abf4-a391b1d4aaf8", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9", Pod:"csi-node-driver-5nnz8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid64d2962521", MAC:"d6:05:5d:39:6c:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:57.699626 containerd[1884]: 
2025-06-20 18:27:57.691 [INFO][5330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" Namespace="calico-system" Pod="csi-node-driver-5nnz8" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-csi--node--driver--5nnz8-eth0" Jun 20 18:27:57.759491 containerd[1884]: time="2025-06-20T18:27:57.759445008Z" level=info msg="connecting to shim 409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9" address="unix:///run/containerd/s/b868e9508bf77fa94d5a0b1c19769ec2666c630ce93bfc7a66ce5809c376f60c" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:57.789481 systemd[1]: Started cri-containerd-409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9.scope - libcontainer container 409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9. Jun 20 18:27:57.821829 containerd[1884]: time="2025-06-20T18:27:57.821534272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68657b97d-z7nsb,Uid:7ca80078-dd2c-46f4-a88b-90d011ac3ef4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\"" Jun 20 18:27:57.834429 containerd[1884]: time="2025-06-20T18:27:57.834247242Z" level=info msg="CreateContainer within sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:27:57.848106 containerd[1884]: time="2025-06-20T18:27:57.848018287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5nnz8,Uid:0a9e252c-f052-4e18-abf4-a391b1d4aaf8,Namespace:calico-system,Attempt:0,} returns sandbox id \"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9\"" Jun 20 18:27:57.886033 containerd[1884]: time="2025-06-20T18:27:57.885906244Z" level=info msg="Container 912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479: CDI devices from CRI Config.CDIDevices: []" Jun 
20 18:27:57.908442 containerd[1884]: time="2025-06-20T18:27:57.908386727Z" level=info msg="CreateContainer within sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\"" Jun 20 18:27:57.909240 containerd[1884]: time="2025-06-20T18:27:57.909212122Z" level=info msg="StartContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\"" Jun 20 18:27:57.910990 containerd[1884]: time="2025-06-20T18:27:57.910960963Z" level=info msg="connecting to shim 912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479" address="unix:///run/containerd/s/705f3a158f8beab589b83443d32471b93b65d9de0fdfbb772bbe80e410c20020" protocol=ttrpc version=3 Jun 20 18:27:57.928459 systemd[1]: Started cri-containerd-912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479.scope - libcontainer container 912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479. 
Jun 20 18:27:57.968368 containerd[1884]: time="2025-06-20T18:27:57.968334753Z" level=info msg="StartContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" returns successfully" Jun 20 18:27:58.045813 containerd[1884]: time="2025-06-20T18:27:58.045776890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gdzhv,Uid:9b1d823a-0ec6-496a-9a96-cd9bacc490d2,Namespace:kube-system,Attempt:0,}" Jun 20 18:27:58.172702 systemd-networkd[1485]: cali489e9fc0aa6: Link UP Jun 20 18:27:58.174046 systemd-networkd[1485]: cali489e9fc0aa6: Gained carrier Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.092 [INFO][5544] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0 coredns-674b8bbfcf- kube-system 9b1d823a-0ec6-496a-9a96-cd9bacc490d2 860 0 2025-06-20 18:27:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef coredns-674b8bbfcf-gdzhv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali489e9fc0aa6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.092 [INFO][5544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.128 [INFO][5555] ipam/ipam_plugin.go 225: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" HandleID="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.128 [INFO][5555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" HandleID="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"coredns-674b8bbfcf-gdzhv", "timestamp":"2025-06-20 18:27:58.127383764 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.128 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.129 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.129 [INFO][5555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.135 [INFO][5555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.141 [INFO][5555] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.146 [INFO][5555] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.147 [INFO][5555] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.149 [INFO][5555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.149 [INFO][5555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.150 [INFO][5555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.155 [INFO][5555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.163 [INFO][5555] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.20.201/26] block=192.168.20.192/26 handle="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.164 [INFO][5555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.201/26] handle="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.164 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:27:58.193411 containerd[1884]: 2025-06-20 18:27:58.164 [INFO][5555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.201/26] IPv6=[] ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" HandleID="k8s-pod-network.bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Workload="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.168 [INFO][5544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b1d823a-0ec6-496a-9a96-cd9bacc490d2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"coredns-674b8bbfcf-gdzhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali489e9fc0aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.169 [INFO][5544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.201/32] ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.169 [INFO][5544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali489e9fc0aa6 ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.175 [INFO][5544] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.176 [INFO][5544] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9b1d823a-0ec6-496a-9a96-cd9bacc490d2", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 27, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be", Pod:"coredns-674b8bbfcf-gdzhv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali489e9fc0aa6", MAC:"22:04:bd:3a:58:12", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:27:58.194689 containerd[1884]: 2025-06-20 18:27:58.190 [INFO][5544] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" Namespace="kube-system" Pod="coredns-674b8bbfcf-gdzhv" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-coredns--674b8bbfcf--gdzhv-eth0" Jun 20 18:27:58.222590 kubelet[3415]: I0620 18:27:58.222531 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68657b97d-z7nsb" podStartSLOduration=29.222516947 podStartE2EDuration="29.222516947s" podCreationTimestamp="2025-06-20 18:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:58.222077765 +0000 UTC m=+44.257533792" watchObservedRunningTime="2025-06-20 18:27:58.222516947 +0000 UTC m=+44.257972966" Jun 20 18:27:58.246091 kubelet[3415]: I0620 18:27:58.245849 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68657b97d-rppnt" podStartSLOduration=26.126310647 podStartE2EDuration="29.24578464s" podCreationTimestamp="2025-06-20 18:27:29 +0000 UTC" firstStartedPulling="2025-06-20 18:27:54.383517938 +0000 UTC m=+40.418973957" lastFinishedPulling="2025-06-20 18:27:57.502991931 +0000 UTC m=+43.538447950" observedRunningTime="2025-06-20 18:27:58.24499775 +0000 UTC 
m=+44.280453881" watchObservedRunningTime="2025-06-20 18:27:58.24578464 +0000 UTC m=+44.281240659" Jun 20 18:27:58.286135 containerd[1884]: time="2025-06-20T18:27:58.286091228Z" level=info msg="connecting to shim bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be" address="unix:///run/containerd/s/0f77f303603da5f8e67b515726bf3ae4ee5264fc69eb12fbbb0131008eec3f45" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:27:58.322441 systemd[1]: Started cri-containerd-bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be.scope - libcontainer container bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be. Jun 20 18:27:58.367721 containerd[1884]: time="2025-06-20T18:27:58.367619276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gdzhv,Uid:9b1d823a-0ec6-496a-9a96-cd9bacc490d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be\"" Jun 20 18:27:58.377731 containerd[1884]: time="2025-06-20T18:27:58.377648093Z" level=info msg="CreateContainer within sandbox \"bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:27:58.416056 containerd[1884]: time="2025-06-20T18:27:58.415766674Z" level=info msg="Container 1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:27:58.421429 systemd-networkd[1485]: cali85739f34867: Gained IPv6LL Jun 20 18:27:58.435716 containerd[1884]: time="2025-06-20T18:27:58.435676249Z" level=info msg="CreateContainer within sandbox \"bbe4447e04f04ad28ea7d26152ce545704f72df81be1fbf306d00ff5035632be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f\"" Jun 20 18:27:58.437313 containerd[1884]: time="2025-06-20T18:27:58.437051742Z" level=info msg="StartContainer for 
\"1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f\"" Jun 20 18:27:58.438235 containerd[1884]: time="2025-06-20T18:27:58.438206884Z" level=info msg="connecting to shim 1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f" address="unix:///run/containerd/s/0f77f303603da5f8e67b515726bf3ae4ee5264fc69eb12fbbb0131008eec3f45" protocol=ttrpc version=3 Jun 20 18:27:58.464438 systemd[1]: Started cri-containerd-1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f.scope - libcontainer container 1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f. Jun 20 18:27:58.500037 containerd[1884]: time="2025-06-20T18:27:58.499839165Z" level=info msg="StartContainer for \"1ef15534a9dd4126d83db511d193e2336ca0c8e3baa99e5fed61e5d83e9d262f\" returns successfully" Jun 20 18:27:58.614503 systemd-networkd[1485]: califb612e112ce: Gained IPv6LL Jun 20 18:27:58.742462 systemd-networkd[1485]: calid64d2962521: Gained IPv6LL Jun 20 18:27:59.218502 kubelet[3415]: I0620 18:27:59.218250 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:59.218750 kubelet[3415]: I0620 18:27:59.218649 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:27:59.301602 kubelet[3415]: I0620 18:27:59.301030 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gdzhv" podStartSLOduration=40.30101324 podStartE2EDuration="40.30101324s" podCreationTimestamp="2025-06-20 18:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:27:59.260090199 +0000 UTC m=+45.295546226" watchObservedRunningTime="2025-06-20 18:27:59.30101324 +0000 UTC m=+45.336469259" Jun 20 18:27:59.573454 systemd-networkd[1485]: cali489e9fc0aa6: Gained IPv6LL Jun 20 18:27:59.902988 containerd[1884]: time="2025-06-20T18:27:59.902759577Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:59.912285 containerd[1884]: time="2025-06-20T18:27:59.912243609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=48129475" Jun 20 18:27:59.920210 containerd[1884]: time="2025-06-20T18:27:59.920167989Z" level=info msg="ImageCreate event name:\"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:59.926056 containerd[1884]: time="2025-06-20T18:27:59.926022854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:27:59.926435 containerd[1884]: time="2025-06-20T18:27:59.926400274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"49498684\" in 2.422069347s" Jun 20 18:27:59.926503 containerd[1884]: time="2025-06-20T18:27:59.926438491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:921fa1ccdd357b885fac8c560f5279f561d980cd3180686e3700e30e3d1fd28f\"" Jun 20 18:27:59.929125 containerd[1884]: time="2025-06-20T18:27:59.929062162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 18:27:59.948923 containerd[1884]: time="2025-06-20T18:27:59.948894221Z" level=info msg="CreateContainer within sandbox \"1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 18:27:59.988112 containerd[1884]: time="2025-06-20T18:27:59.987993298Z" level=info msg="Container f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:00.014851 containerd[1884]: time="2025-06-20T18:28:00.014721961Z" level=info msg="CreateContainer within sandbox \"1f7370b1562e6e69c38ffab613e844ebe2acfa6448b020276680654011a78198\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\"" Jun 20 18:28:00.016594 containerd[1884]: time="2025-06-20T18:28:00.016514652Z" level=info msg="StartContainer for \"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\"" Jun 20 18:28:00.018315 containerd[1884]: time="2025-06-20T18:28:00.018266653Z" level=info msg="connecting to shim f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79" address="unix:///run/containerd/s/60b200b120d237d2389150262ea4ad59de9e56b8361f287b8b0a9076dabbf6e9" protocol=ttrpc version=3 Jun 20 18:28:00.066577 systemd[1]: Started cri-containerd-f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79.scope - libcontainer container f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79. 
Jun 20 18:28:00.198902 containerd[1884]: time="2025-06-20T18:28:00.198743625Z" level=info msg="StartContainer for \"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" returns successfully" Jun 20 18:28:00.247284 kubelet[3415]: I0620 18:28:00.247221 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd7745df-fdl5q" podStartSLOduration=23.883173184 podStartE2EDuration="28.24720777s" podCreationTimestamp="2025-06-20 18:27:32 +0000 UTC" firstStartedPulling="2025-06-20 18:27:55.564412819 +0000 UTC m=+41.599868838" lastFinishedPulling="2025-06-20 18:27:59.928447397 +0000 UTC m=+45.963903424" observedRunningTime="2025-06-20 18:28:00.246014474 +0000 UTC m=+46.281470493" watchObservedRunningTime="2025-06-20 18:28:00.24720777 +0000 UTC m=+46.282663789" Jun 20 18:28:00.314424 containerd[1884]: time="2025-06-20T18:28:00.314369881Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:00.318317 containerd[1884]: time="2025-06-20T18:28:00.318073883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 18:28:00.319605 containerd[1884]: time="2025-06-20T18:28:00.319577268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"45884107\" in 390.49025ms" Jun 20 18:28:00.319605 containerd[1884]: time="2025-06-20T18:28:00.319606405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:10b9b9e9d586aae9a4888055ea5a34c6abf5443f09529cfb9ca25ddf7670a490\"" Jun 20 18:28:00.322189 containerd[1884]: 
time="2025-06-20T18:28:00.322040133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 18:28:00.331113 containerd[1884]: time="2025-06-20T18:28:00.331082894Z" level=info msg="CreateContainer within sandbox \"b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:28:00.372784 containerd[1884]: time="2025-06-20T18:28:00.372641188Z" level=info msg="Container 5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:00.404275 containerd[1884]: time="2025-06-20T18:28:00.404236579Z" level=info msg="CreateContainer within sandbox \"b92ca99b415906d96a221fe6b96be04ede725e460e0a4e77abf3b5e886a2b692\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3\"" Jun 20 18:28:00.405311 containerd[1884]: time="2025-06-20T18:28:00.404887856Z" level=info msg="StartContainer for \"5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3\"" Jun 20 18:28:00.406381 containerd[1884]: time="2025-06-20T18:28:00.406007957Z" level=info msg="connecting to shim 5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3" address="unix:///run/containerd/s/c53761db0eb33e9fb7db693d76011edd3c044cce993e3eedf7c0b73ebb9d422d" protocol=ttrpc version=3 Jun 20 18:28:00.427442 systemd[1]: Started cri-containerd-5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3.scope - libcontainer container 5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3. 
Jun 20 18:28:00.489978 containerd[1884]: time="2025-06-20T18:28:00.489868649Z" level=info msg="StartContainer for \"5f5b08af86dbdd22aaa4c6795eb5dd18edafe9f35b0487f97f66531d245bd3a3\" returns successfully" Jun 20 18:28:01.228946 kubelet[3415]: I0620 18:28:01.228857 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:01.245146 kubelet[3415]: I0620 18:28:01.244462 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b4c847979-l4jsd" podStartSLOduration=26.505082412 podStartE2EDuration="31.244445377s" podCreationTimestamp="2025-06-20 18:27:30 +0000 UTC" firstStartedPulling="2025-06-20 18:27:55.58237831 +0000 UTC m=+41.617834329" lastFinishedPulling="2025-06-20 18:28:00.321741267 +0000 UTC m=+46.357197294" observedRunningTime="2025-06-20 18:28:01.243193679 +0000 UTC m=+47.278649698" watchObservedRunningTime="2025-06-20 18:28:01.244445377 +0000 UTC m=+47.279901396" Jun 20 18:28:02.230214 kubelet[3415]: I0620 18:28:02.230182 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:02.878414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3277234083.mount: Deactivated successfully. 
Jun 20 18:28:03.815433 containerd[1884]: time="2025-06-20T18:28:03.815371672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:03.823315 containerd[1884]: time="2025-06-20T18:28:03.822311284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=61832718" Jun 20 18:28:03.827541 containerd[1884]: time="2025-06-20T18:28:03.827505935Z" level=info msg="ImageCreate event name:\"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:03.835762 containerd[1884]: time="2025-06-20T18:28:03.835708652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:03.836603 containerd[1884]: time="2025-06-20T18:28:03.836570264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"61832564\" in 3.51450433s" Jun 20 18:28:03.836703 containerd[1884]: time="2025-06-20T18:28:03.836598753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:e153acb7e29a35b1e19436bff04be770e54b133613fb452f3729ecf7d5155407\"" Jun 20 18:28:03.837955 containerd[1884]: time="2025-06-20T18:28:03.837926701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 18:28:03.847748 containerd[1884]: time="2025-06-20T18:28:03.847621308Z" level=info msg="CreateContainer within sandbox \"e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 18:28:03.882691 containerd[1884]: time="2025-06-20T18:28:03.881320879Z" level=info msg="Container a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:03.906018 containerd[1884]: time="2025-06-20T18:28:03.905975034Z" level=info msg="CreateContainer within sandbox \"e292b8fadbab669b26b762c222661d2b279968fe5b5203608c6bf0f1de86ed4c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\"" Jun 20 18:28:03.906957 containerd[1884]: time="2025-06-20T18:28:03.906933705Z" level=info msg="StartContainer for \"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\"" Jun 20 18:28:03.907938 containerd[1884]: time="2025-06-20T18:28:03.907915073Z" level=info msg="connecting to shim a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04" address="unix:///run/containerd/s/6bcbf222e9f83039373ee68ee939b7aaed7d20f941e9a7cf739e4d6b122fed3e" protocol=ttrpc version=3 Jun 20 18:28:03.930406 systemd[1]: Started cri-containerd-a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04.scope - libcontainer container a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04. 
Jun 20 18:28:03.965579 containerd[1884]: time="2025-06-20T18:28:03.965500654Z" level=info msg="StartContainer for \"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" returns successfully" Jun 20 18:28:04.377130 containerd[1884]: time="2025-06-20T18:28:04.377084205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"eae63ae4332751fb115070b3bd3926b6dbf7a2302a848851aef9e847ae273db3\" pid:5804 exit_status:1 exited_at:{seconds:1750444084 nanos:371582112}" Jun 20 18:28:05.219894 containerd[1884]: time="2025-06-20T18:28:05.219403720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:05.224391 containerd[1884]: time="2025-06-20T18:28:05.224345059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8226240" Jun 20 18:28:05.231595 containerd[1884]: time="2025-06-20T18:28:05.230881305Z" level=info msg="ImageCreate event name:\"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:05.240745 containerd[1884]: time="2025-06-20T18:28:05.240714325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:05.241659 containerd[1884]: time="2025-06-20T18:28:05.241635499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"9595481\" in 1.403586946s" Jun 20 18:28:05.241784 containerd[1884]: 
time="2025-06-20T18:28:05.241768647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:7ed629178f937977285a4cbf7e979b6156a1d2d3b8db94117da3e21bc2209d69\"" Jun 20 18:28:05.250171 containerd[1884]: time="2025-06-20T18:28:05.250113114Z" level=info msg="CreateContainer within sandbox \"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 18:28:05.292665 containerd[1884]: time="2025-06-20T18:28:05.292618014Z" level=info msg="Container f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:05.297486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142234052.mount: Deactivated successfully. Jun 20 18:28:05.329162 containerd[1884]: time="2025-06-20T18:28:05.329117363Z" level=info msg="CreateContainer within sandbox \"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5\"" Jun 20 18:28:05.331625 containerd[1884]: time="2025-06-20T18:28:05.330257824Z" level=info msg="StartContainer for \"f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5\"" Jun 20 18:28:05.331625 containerd[1884]: time="2025-06-20T18:28:05.331252193Z" level=info msg="connecting to shim f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5" address="unix:///run/containerd/s/b868e9508bf77fa94d5a0b1c19769ec2666c630ce93bfc7a66ce5809c376f60c" protocol=ttrpc version=3 Jun 20 18:28:05.361472 systemd[1]: Started cri-containerd-f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5.scope - libcontainer container f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5. 
Jun 20 18:28:05.382957 containerd[1884]: time="2025-06-20T18:28:05.382668542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"4f027ae724fd44e612bc44fd30a209e88a296b68ab0a37d03107e76e92361eb7\" pid:5833 exit_status:1 exited_at:{seconds:1750444085 nanos:381955967}" Jun 20 18:28:05.433382 containerd[1884]: time="2025-06-20T18:28:05.433345108Z" level=info msg="StartContainer for \"f7d7ad4e0a2f2069f5380c756c67b6a6981b63323953cb056a01a9df991523d5\" returns successfully" Jun 20 18:28:05.436643 containerd[1884]: time="2025-06-20T18:28:05.436614991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 18:28:06.313821 containerd[1884]: time="2025-06-20T18:28:06.313766610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"518ccc95a9adf3c148bc0dfb23977dfaf3efaa570d2bd20b63ce0d3d40f8e6cd\" pid:5887 exit_status:1 exited_at:{seconds:1750444086 nanos:313368397}" Jun 20 18:28:06.940186 containerd[1884]: time="2025-06-20T18:28:06.940115448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:06.944024 containerd[1884]: time="2025-06-20T18:28:06.943981287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=13749925" Jun 20 18:28:06.951050 containerd[1884]: time="2025-06-20T18:28:06.951005845Z" level=info msg="ImageCreate event name:\"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:06.958494 containerd[1884]: time="2025-06-20T18:28:06.958369558Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:28:06.959390 containerd[1884]: time="2025-06-20T18:28:06.959362559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"15119118\" in 1.522594324s" Jun 20 18:28:06.959501 containerd[1884]: time="2025-06-20T18:28:06.959487467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:1e6e783be739df03247db08791a7feec05869cd9c6e8bb138bb599ca716b6018\"" Jun 20 18:28:06.970991 containerd[1884]: time="2025-06-20T18:28:06.970891689Z" level=info msg="CreateContainer within sandbox \"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 18:28:07.012233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590344714.mount: Deactivated successfully. 
Jun 20 18:28:07.014348 containerd[1884]: time="2025-06-20T18:28:07.013440444Z" level=info msg="Container 721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:07.044157 containerd[1884]: time="2025-06-20T18:28:07.043941788Z" level=info msg="CreateContainer within sandbox \"409d279fc1a9e2e46b08d69d630be14d8aff7e9d49b9c529c9748e48dd7e39a9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1\"" Jun 20 18:28:07.047186 containerd[1884]: time="2025-06-20T18:28:07.047016648Z" level=info msg="StartContainer for \"721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1\"" Jun 20 18:28:07.048682 containerd[1884]: time="2025-06-20T18:28:07.048658062Z" level=info msg="connecting to shim 721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1" address="unix:///run/containerd/s/b868e9508bf77fa94d5a0b1c19769ec2666c630ce93bfc7a66ce5809c376f60c" protocol=ttrpc version=3 Jun 20 18:28:07.070434 systemd[1]: Started cri-containerd-721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1.scope - libcontainer container 721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1. 
Jun 20 18:28:07.104954 containerd[1884]: time="2025-06-20T18:28:07.104923619Z" level=info msg="StartContainer for \"721b933557deb37e417517fb42634c8430af030f744a5c8bebd3abcb086101e1\" returns successfully" Jun 20 18:28:07.133211 kubelet[3415]: I0620 18:28:07.133177 3415 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 18:28:07.135446 kubelet[3415]: I0620 18:28:07.135422 3415 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 18:28:07.309961 kubelet[3415]: I0620 18:28:07.309699 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-l8dbl" podStartSLOduration=29.888734039 podStartE2EDuration="36.309682075s" podCreationTimestamp="2025-06-20 18:27:31 +0000 UTC" firstStartedPulling="2025-06-20 18:27:57.416745161 +0000 UTC m=+43.452201180" lastFinishedPulling="2025-06-20 18:28:03.837693189 +0000 UTC m=+49.873149216" observedRunningTime="2025-06-20 18:28:04.288114729 +0000 UTC m=+50.323570748" watchObservedRunningTime="2025-06-20 18:28:07.309682075 +0000 UTC m=+53.345138102" Jun 20 18:28:07.310934 kubelet[3415]: I0620 18:28:07.310851 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5nnz8" podStartSLOduration=26.200360774 podStartE2EDuration="35.310841785s" podCreationTimestamp="2025-06-20 18:27:32 +0000 UTC" firstStartedPulling="2025-06-20 18:27:57.851214872 +0000 UTC m=+43.886670891" lastFinishedPulling="2025-06-20 18:28:06.961695883 +0000 UTC m=+52.997151902" observedRunningTime="2025-06-20 18:28:07.310496286 +0000 UTC m=+53.345952305" watchObservedRunningTime="2025-06-20 18:28:07.310841785 +0000 UTC m=+53.346297812" Jun 20 18:28:10.244053 kubelet[3415]: I0620 18:28:10.243728 3415 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Jun 20 18:28:10.291315 containerd[1884]: time="2025-06-20T18:28:10.291245683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"91c10dca05e46aead24e97f22974ab34751c2230942ce0be20295ab3b27f8e81\" pid:5957 exited_at:{seconds:1750444090 nanos:290963154}" Jun 20 18:28:10.337106 containerd[1884]: time="2025-06-20T18:28:10.336458046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"569827fc3dfed585638434848e679e2c23b7fc9da68a29796e7f55c258c668f8\" pid:5978 exited_at:{seconds:1750444090 nanos:336114642}" Jun 20 18:28:10.562372 kubelet[3415]: I0620 18:28:10.562077 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:10.642122 kubelet[3415]: I0620 18:28:10.642079 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:10.643802 containerd[1884]: time="2025-06-20T18:28:10.643766656Z" level=info msg="StopContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" with timeout 30 (s)" Jun 20 18:28:10.650065 containerd[1884]: time="2025-06-20T18:28:10.650028565Z" level=info msg="Stop container \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" with signal terminated" Jun 20 18:28:10.675223 systemd[1]: cri-containerd-912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479.scope: Deactivated successfully. 
Jun 20 18:28:10.689620 containerd[1884]: time="2025-06-20T18:28:10.689567574Z" level=info msg="received exit event container_id:\"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" id:\"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" pid:5521 exit_status:1 exited_at:{seconds:1750444090 nanos:688619302}" Jun 20 18:28:10.696114 containerd[1884]: time="2025-06-20T18:28:10.696061514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" id:\"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" pid:5521 exit_status:1 exited_at:{seconds:1750444090 nanos:688619302}" Jun 20 18:28:10.707960 systemd[1]: Created slice kubepods-besteffort-pod60c909ec_672d_4eea_b46f_d866bc98e5bc.slice - libcontainer container kubepods-besteffort-pod60c909ec_672d_4eea_b46f_d866bc98e5bc.slice. Jun 20 18:28:10.723218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479-rootfs.mount: Deactivated successfully. 
Jun 20 18:28:10.814502 kubelet[3415]: I0620 18:28:10.814385 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/60c909ec-672d-4eea-b46f-d866bc98e5bc-calico-apiserver-certs\") pod \"calico-apiserver-b4c847979-zdk5l\" (UID: \"60c909ec-672d-4eea-b46f-d866bc98e5bc\") " pod="calico-apiserver/calico-apiserver-b4c847979-zdk5l" Jun 20 18:28:10.814502 kubelet[3415]: I0620 18:28:10.814427 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6zv\" (UniqueName: \"kubernetes.io/projected/60c909ec-672d-4eea-b46f-d866bc98e5bc-kube-api-access-bp6zv\") pod \"calico-apiserver-b4c847979-zdk5l\" (UID: \"60c909ec-672d-4eea-b46f-d866bc98e5bc\") " pod="calico-apiserver/calico-apiserver-b4c847979-zdk5l" Jun 20 18:28:11.012754 containerd[1884]: time="2025-06-20T18:28:11.012696015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-zdk5l,Uid:60c909ec-672d-4eea-b46f-d866bc98e5bc,Namespace:calico-apiserver,Attempt:0,}" Jun 20 18:28:11.504580 containerd[1884]: time="2025-06-20T18:28:11.504447952Z" level=info msg="StopContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" returns successfully" Jun 20 18:28:11.505774 containerd[1884]: time="2025-06-20T18:28:11.505470977Z" level=info msg="StopPodSandbox for \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\"" Jun 20 18:28:11.505774 containerd[1884]: time="2025-06-20T18:28:11.505527115Z" level=info msg="Container to stop \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:28:11.519180 systemd[1]: cri-containerd-4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3.scope: Deactivated successfully. 
Jun 20 18:28:11.527860 containerd[1884]: time="2025-06-20T18:28:11.527825838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" id:\"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" pid:5427 exit_status:137 exited_at:{seconds:1750444091 nanos:521776712}" Jun 20 18:28:11.569722 containerd[1884]: time="2025-06-20T18:28:11.565935015Z" level=info msg="shim disconnected" id=4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3 namespace=k8s.io Jun 20 18:28:11.569722 containerd[1884]: time="2025-06-20T18:28:11.566037243Z" level=warning msg="cleaning up after shim disconnected" id=4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3 namespace=k8s.io Jun 20 18:28:11.569722 containerd[1884]: time="2025-06-20T18:28:11.566069996Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:28:11.568736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3-rootfs.mount: Deactivated successfully. 
Jun 20 18:28:11.608212 systemd-networkd[1485]: cali8518db277ac: Link UP Jun 20 18:28:11.609691 systemd-networkd[1485]: cali8518db277ac: Gained carrier Jun 20 18:28:11.633217 containerd[1884]: time="2025-06-20T18:28:11.633085489Z" level=info msg="received exit event sandbox_id:\"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" exit_status:137 exited_at:{seconds:1750444091 nanos:521776712}" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.507 [INFO][6013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0 calico-apiserver-b4c847979- calico-apiserver 60c909ec-672d-4eea-b46f-d866bc98e5bc 1126 0 2025-06-20 18:28:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b4c847979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.0-a-442b0d77ef calico-apiserver-b4c847979-zdk5l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8518db277ac [] [] }} ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.507 [INFO][6013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.546 [INFO][6030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" HandleID="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.547 [INFO][6030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" HandleID="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.0-a-442b0d77ef", "pod":"calico-apiserver-b4c847979-zdk5l", "timestamp":"2025-06-20 18:28:11.546759835 +0000 UTC"}, Hostname:"ci-4344.1.0-a-442b0d77ef", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.547 [INFO][6030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.547 [INFO][6030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.547 [INFO][6030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.0-a-442b0d77ef' Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.554 [INFO][6030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.558 [INFO][6030] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.570 [INFO][6030] ipam/ipam.go 511: Trying affinity for 192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.571 [INFO][6030] ipam/ipam.go 158: Attempting to load block cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.573 [INFO][6030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.573 [INFO][6030] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.576 [INFO][6030] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.586 [INFO][6030] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.599 [INFO][6030] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.20.202/26] block=192.168.20.192/26 handle="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.600 [INFO][6030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.20.202/26] handle="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" host="ci-4344.1.0-a-442b0d77ef" Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.600 [INFO][6030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:28:11.635896 containerd[1884]: 2025-06-20 18:28:11.600 [INFO][6030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.202/26] IPv6=[] ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" HandleID="k8s-pod-network.9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.603 [INFO][6013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0", GenerateName:"calico-apiserver-b4c847979-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c909ec-672d-4eea-b46f-d866bc98e5bc", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 28, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"b4c847979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"", Pod:"calico-apiserver-b4c847979-zdk5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8518db277ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.603 [INFO][6013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.202/32] ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.603 [INFO][6013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8518db277ac ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.611 [INFO][6013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" 
WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.612 [INFO][6013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0", GenerateName:"calico-apiserver-b4c847979-", Namespace:"calico-apiserver", SelfLink:"", UID:"60c909ec-672d-4eea-b46f-d866bc98e5bc", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 18, 28, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b4c847979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.0-a-442b0d77ef", ContainerID:"9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d", Pod:"calico-apiserver-b4c847979-zdk5l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8518db277ac", MAC:"76:76:92:23:30:60", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 18:28:11.636263 containerd[1884]: 2025-06-20 18:28:11.631 [INFO][6013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" Namespace="calico-apiserver" Pod="calico-apiserver-b4c847979-zdk5l" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--b4c847979--zdk5l-eth0" Jun 20 18:28:11.697460 containerd[1884]: time="2025-06-20T18:28:11.697418254Z" level=info msg="connecting to shim 9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d" address="unix:///run/containerd/s/1c884596b517ff79a527592fecedcdcf3f677cf8a480b3fcc47e0a56d698f678" namespace=k8s.io protocol=ttrpc version=3 Jun 20 18:28:11.707485 systemd-networkd[1485]: califb612e112ce: Link DOWN Jun 20 18:28:11.707491 systemd-networkd[1485]: califb612e112ce: Lost carrier Jun 20 18:28:11.728328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3-shm.mount: Deactivated successfully. Jun 20 18:28:11.748448 systemd[1]: Started cri-containerd-9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d.scope - libcontainer container 9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d. Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.703 [INFO][6085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.703 [INFO][6085] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" iface="eth0" netns="/var/run/netns/cni-3ddf2a75-710b-36de-9182-c9934e4e1343" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.706 [INFO][6085] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" iface="eth0" netns="/var/run/netns/cni-3ddf2a75-710b-36de-9182-c9934e4e1343" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.715 [INFO][6085] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" after=11.594796ms iface="eth0" netns="/var/run/netns/cni-3ddf2a75-710b-36de-9182-c9934e4e1343" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.715 [INFO][6085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.715 [INFO][6085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.759 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.760 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.760 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.836 [INFO][6122] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.837 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.841 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:28:11.848080 containerd[1884]: 2025-06-20 18:28:11.843 [INFO][6085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:11.850641 containerd[1884]: time="2025-06-20T18:28:11.850448439Z" level=info msg="TearDown network for sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" successfully" Jun 20 18:28:11.850641 containerd[1884]: time="2025-06-20T18:28:11.850475824Z" level=info msg="StopPodSandbox for \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" returns successfully" Jun 20 18:28:11.852197 systemd[1]: run-netns-cni\x2d3ddf2a75\x2d710b\x2d36de\x2d9182\x2dc9934e4e1343.mount: Deactivated successfully. 
Jun 20 18:28:11.863768 containerd[1884]: time="2025-06-20T18:28:11.863664888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b4c847979-zdk5l,Uid:60c909ec-672d-4eea-b46f-d866bc98e5bc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d\"" Jun 20 18:28:11.873897 containerd[1884]: time="2025-06-20T18:28:11.873872487Z" level=info msg="CreateContainer within sandbox \"9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 18:28:11.911521 containerd[1884]: time="2025-06-20T18:28:11.910972887Z" level=info msg="Container e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f: CDI devices from CRI Config.CDIDevices: []" Jun 20 18:28:11.926483 kubelet[3415]: I0620 18:28:11.926450 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ldnm\" (UniqueName: \"kubernetes.io/projected/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-kube-api-access-9ldnm\") pod \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\" (UID: \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\") " Jun 20 18:28:11.930862 systemd[1]: var-lib-kubelet-pods-7ca80078\x2ddd2c\x2d46f4\x2da88b\x2d90d011ac3ef4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ldnm.mount: Deactivated successfully. 
Jun 20 18:28:11.932469 kubelet[3415]: I0620 18:28:11.932448 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-calico-apiserver-certs\") pod \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\" (UID: \"7ca80078-dd2c-46f4-a88b-90d011ac3ef4\") " Jun 20 18:28:11.935434 kubelet[3415]: I0620 18:28:11.935402 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-kube-api-access-9ldnm" (OuterVolumeSpecName: "kube-api-access-9ldnm") pod "7ca80078-dd2c-46f4-a88b-90d011ac3ef4" (UID: "7ca80078-dd2c-46f4-a88b-90d011ac3ef4"). InnerVolumeSpecName "kube-api-access-9ldnm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:28:11.937257 kubelet[3415]: I0620 18:28:11.937223 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7ca80078-dd2c-46f4-a88b-90d011ac3ef4" (UID: "7ca80078-dd2c-46f4-a88b-90d011ac3ef4"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:28:11.937864 systemd[1]: var-lib-kubelet-pods-7ca80078\x2ddd2c\x2d46f4\x2da88b\x2d90d011ac3ef4-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jun 20 18:28:11.940373 containerd[1884]: time="2025-06-20T18:28:11.940340346Z" level=info msg="CreateContainer within sandbox \"9c635526bb9cd67609933124cce162a461ae3872121539636b3ce5eee0e4211d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f\"" Jun 20 18:28:11.941005 containerd[1884]: time="2025-06-20T18:28:11.940966414Z" level=info msg="StartContainer for \"e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f\"" Jun 20 18:28:11.942389 containerd[1884]: time="2025-06-20T18:28:11.942345563Z" level=info msg="connecting to shim e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f" address="unix:///run/containerd/s/1c884596b517ff79a527592fecedcdcf3f677cf8a480b3fcc47e0a56d698f678" protocol=ttrpc version=3 Jun 20 18:28:11.956505 systemd[1]: Started cri-containerd-e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f.scope - libcontainer container e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f. 
Jun 20 18:28:12.000894 containerd[1884]: time="2025-06-20T18:28:12.000706220Z" level=info msg="StartContainer for \"e4a5c1858955fbfc2fe6c2592a8844f607566cf61c2ebb7a7d2c713fc811127f\" returns successfully" Jun 20 18:28:12.033563 kubelet[3415]: I0620 18:28:12.033509 3415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ldnm\" (UniqueName: \"kubernetes.io/projected/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-kube-api-access-9ldnm\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:28:12.033563 kubelet[3415]: I0620 18:28:12.033544 3415 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7ca80078-dd2c-46f4-a88b-90d011ac3ef4-calico-apiserver-certs\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:28:12.049092 systemd[1]: Removed slice kubepods-besteffort-pod7ca80078_dd2c_46f4_a88b_90d011ac3ef4.slice - libcontainer container kubepods-besteffort-pod7ca80078_dd2c_46f4_a88b_90d011ac3ef4.slice. 
Jun 20 18:28:12.284333 kubelet[3415]: I0620 18:28:12.284084 3415 scope.go:117] "RemoveContainer" containerID="912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479" Jun 20 18:28:12.290299 containerd[1884]: time="2025-06-20T18:28:12.290257377Z" level=info msg="RemoveContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\"" Jun 20 18:28:12.326809 containerd[1884]: time="2025-06-20T18:28:12.326766030Z" level=info msg="RemoveContainer for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" returns successfully" Jun 20 18:28:12.330937 kubelet[3415]: I0620 18:28:12.330908 3415 scope.go:117] "RemoveContainer" containerID="912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479" Jun 20 18:28:12.331794 containerd[1884]: time="2025-06-20T18:28:12.331345436Z" level=error msg="ContainerStatus for \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\": not found" Jun 20 18:28:12.332075 kubelet[3415]: E0620 18:28:12.332052 3415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\": not found" containerID="912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479" Jun 20 18:28:12.332132 kubelet[3415]: I0620 18:28:12.332079 3415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479"} err="failed to get container status \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\": rpc error: code = NotFound desc = an error occurred when try to find container \"912f3d542958f4a4ffbe51e118e52226a1524c409474bf7f588f32ffacf26479\": not found" Jun 20 18:28:13.013471 
systemd-networkd[1485]: cali8518db277ac: Gained IPv6LL Jun 20 18:28:13.298550 kubelet[3415]: I0620 18:28:13.298443 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:13.585041 containerd[1884]: time="2025-06-20T18:28:13.584976183Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1750444091 nanos:521776712}" Jun 20 18:28:14.047965 kubelet[3415]: I0620 18:28:14.047459 3415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca80078-dd2c-46f4-a88b-90d011ac3ef4" path="/var/lib/kubelet/pods/7ca80078-dd2c-46f4-a88b-90d011ac3ef4/volumes" Jun 20 18:28:14.049862 containerd[1884]: time="2025-06-20T18:28:14.048473438Z" level=info msg="StopPodSandbox for \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\"" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.074 [WARNING][6205] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.074 [INFO][6205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.074 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" iface="eth0" netns="" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.074 [INFO][6205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.074 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.088 [INFO][6212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.088 [INFO][6212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.088 [INFO][6212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.117 [WARNING][6212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.117 [INFO][6212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.119 [INFO][6212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:28:14.124496 containerd[1884]: 2025-06-20 18:28:14.122 [INFO][6205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.124968 containerd[1884]: time="2025-06-20T18:28:14.124851833Z" level=info msg="TearDown network for sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" successfully" Jun 20 18:28:14.124968 containerd[1884]: time="2025-06-20T18:28:14.124891618Z" level=info msg="StopPodSandbox for \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" returns successfully" Jun 20 18:28:14.125593 containerd[1884]: time="2025-06-20T18:28:14.125487814Z" level=info msg="RemovePodSandbox for \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\"" Jun 20 18:28:14.125593 containerd[1884]: time="2025-06-20T18:28:14.125541143Z" level=info msg="Forcibly stopping sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\"" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.153 [WARNING][6226] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.153 [INFO][6226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.153 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" iface="eth0" netns="" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.153 [INFO][6226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.154 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.168 [INFO][6233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.169 [INFO][6233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.169 [INFO][6233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.173 [WARNING][6233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.173 [INFO][6233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" HandleID="k8s-pod-network.4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--z7nsb-eth0" Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.175 [INFO][6233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:28:14.177608 containerd[1884]: 2025-06-20 18:28:14.176 [INFO][6226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3" Jun 20 18:28:14.178321 containerd[1884]: time="2025-06-20T18:28:14.177983664Z" level=info msg="TearDown network for sandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" successfully" Jun 20 18:28:14.179520 containerd[1884]: time="2025-06-20T18:28:14.179487465Z" level=info msg="Ensure that sandbox 4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3 in task-service has been cleanup successfully" Jun 20 18:28:14.192172 containerd[1884]: time="2025-06-20T18:28:14.192141797Z" level=info msg="RemovePodSandbox \"4dfb2f0889f0c8aaa26400eaefc468a6082bcd1490398a4f663565f6d72616c3\" returns successfully" Jun 20 18:28:16.428607 kubelet[3415]: I0620 18:28:16.428204 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:16.484269 kubelet[3415]: I0620 18:28:16.484220 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b4c847979-zdk5l" podStartSLOduration=6.484203632 
podStartE2EDuration="6.484203632s" podCreationTimestamp="2025-06-20 18:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:28:12.333306356 +0000 UTC m=+58.368762375" watchObservedRunningTime="2025-06-20 18:28:16.484203632 +0000 UTC m=+62.519659651" Jun 20 18:28:21.893355 containerd[1884]: time="2025-06-20T18:28:21.893258776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"bce7368b40849a260a5c17bd07ef0c4aa604c848bc3cb168e935efc171614286\" pid:6260 exited_at:{seconds:1750444101 nanos:892922150}" Jun 20 18:28:23.326808 update_engine[1874]: I20250620 18:28:23.326333 1874 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:28:23.326808 update_engine[1874]: I20250620 18:28:23.326378 1874 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:28:23.326808 update_engine[1874]: I20250620 18:28:23.326582 1874 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:28:23.327175 update_engine[1874]: I20250620 18:28:23.327020 1874 omaha_request_params.cc:62] Current group set to beta Jun 20 18:28:23.327730 update_engine[1874]: I20250620 18:28:23.327580 1874 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:28:23.327730 update_engine[1874]: I20250620 18:28:23.327596 1874 update_attempter.cc:643] Scheduling an action processor start. 
Jun 20 18:28:23.327730 update_engine[1874]: I20250620 18:28:23.327613 1874 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:28:23.328170 update_engine[1874]: I20250620 18:28:23.328132 1874 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:28:23.328426 update_engine[1874]: I20250620 18:28:23.328279 1874 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:28:23.328426 update_engine[1874]: I20250620 18:28:23.328320 1874 omaha_request_action.cc:272] Request: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: Jun 20 18:28:23.328426 update_engine[1874]: I20250620 18:28:23.328328 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:28:23.334968 update_engine[1874]: I20250620 18:28:23.333691 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:28:23.334968 update_engine[1874]: I20250620 18:28:23.334008 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 18:28:23.335122 locksmithd[1983]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:28:23.464376 update_engine[1874]: E20250620 18:28:23.464314 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:28:23.464523 update_engine[1874]: I20250620 18:28:23.464418 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:28:26.743683 kubelet[3415]: I0620 18:28:26.743581 3415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:28:26.826380 containerd[1884]: time="2025-06-20T18:28:26.826301574Z" level=info msg="StopContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" with timeout 30 (s)" Jun 20 18:28:26.827122 containerd[1884]: time="2025-06-20T18:28:26.827100575Z" level=info msg="Stop container \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" with signal terminated" Jun 20 18:28:26.849257 systemd[1]: cri-containerd-fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c.scope: Deactivated successfully. 
Jun 20 18:28:26.852073 containerd[1884]: time="2025-06-20T18:28:26.852035988Z" level=info msg="received exit event container_id:\"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" id:\"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" pid:5383 exit_status:1 exited_at:{seconds:1750444106 nanos:851690209}" Jun 20 18:28:26.853314 containerd[1884]: time="2025-06-20T18:28:26.852403824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" id:\"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" pid:5383 exit_status:1 exited_at:{seconds:1750444106 nanos:851690209}" Jun 20 18:28:26.874468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c-rootfs.mount: Deactivated successfully. Jun 20 18:28:26.953987 containerd[1884]: time="2025-06-20T18:28:26.953946300Z" level=info msg="StopContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" returns successfully" Jun 20 18:28:26.954621 containerd[1884]: time="2025-06-20T18:28:26.954581048Z" level=info msg="StopPodSandbox for \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\"" Jun 20 18:28:26.954684 containerd[1884]: time="2025-06-20T18:28:26.954650962Z" level=info msg="Container to stop \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:28:26.959797 systemd[1]: cri-containerd-a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff.scope: Deactivated successfully. 
Jun 20 18:28:26.962051 containerd[1884]: time="2025-06-20T18:28:26.962011473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" id:\"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" pid:4968 exit_status:137 exited_at:{seconds:1750444106 nanos:960862573}" Jun 20 18:28:26.981506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff-rootfs.mount: Deactivated successfully. Jun 20 18:28:26.983093 containerd[1884]: time="2025-06-20T18:28:26.982419704Z" level=info msg="shim disconnected" id=a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff namespace=k8s.io Jun 20 18:28:26.983093 containerd[1884]: time="2025-06-20T18:28:26.982448113Z" level=warning msg="cleaning up after shim disconnected" id=a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff namespace=k8s.io Jun 20 18:28:26.983093 containerd[1884]: time="2025-06-20T18:28:26.982472234Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:28:27.013710 containerd[1884]: time="2025-06-20T18:28:27.011763416Z" level=info msg="received exit event sandbox_id:\"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" exit_status:137 exited_at:{seconds:1750444106 nanos:960862573}" Jun 20 18:28:27.013752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff-shm.mount: Deactivated successfully. 
Jun 20 18:28:27.058764 systemd-networkd[1485]: cali0afab43fa38: Link DOWN Jun 20 18:28:27.059267 systemd-networkd[1485]: cali0afab43fa38: Lost carrier Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.055 [INFO][6348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.056 [INFO][6348] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" iface="eth0" netns="/var/run/netns/cni-c9fd952e-8e4f-dd4e-05b7-a9b98e2d0b58" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.056 [INFO][6348] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" iface="eth0" netns="/var/run/netns/cni-c9fd952e-8e4f-dd4e-05b7-a9b98e2d0b58" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.063 [INFO][6348] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" after=7.546061ms iface="eth0" netns="/var/run/netns/cni-c9fd952e-8e4f-dd4e-05b7-a9b98e2d0b58" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.063 [INFO][6348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.063 [INFO][6348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.082 [INFO][6357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.083 [INFO][6357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.083 [INFO][6357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.118 [INFO][6357] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.118 [INFO][6357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.119 [INFO][6357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:28:27.122697 containerd[1884]: 2025-06-20 18:28:27.121 [INFO][6348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:28:27.123765 containerd[1884]: time="2025-06-20T18:28:27.123202882Z" level=info msg="TearDown network for sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" successfully" Jun 20 18:28:27.123765 containerd[1884]: time="2025-06-20T18:28:27.123233091Z" level=info msg="StopPodSandbox for \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" returns successfully" Jun 20 18:28:27.125632 systemd[1]: run-netns-cni\x2dc9fd952e\x2d8e4f\x2ddd4e\x2d05b7\x2da9b98e2d0b58.mount: Deactivated successfully. 
Jun 20 18:28:27.231979 kubelet[3415]: I0620 18:28:27.231935 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zfjh\" (UniqueName: \"kubernetes.io/projected/8ff637cd-0f12-4574-89da-90b39dbb286e-kube-api-access-6zfjh\") pod \"8ff637cd-0f12-4574-89da-90b39dbb286e\" (UID: \"8ff637cd-0f12-4574-89da-90b39dbb286e\") " Jun 20 18:28:27.232379 kubelet[3415]: I0620 18:28:27.232355 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ff637cd-0f12-4574-89da-90b39dbb286e-calico-apiserver-certs\") pod \"8ff637cd-0f12-4574-89da-90b39dbb286e\" (UID: \"8ff637cd-0f12-4574-89da-90b39dbb286e\") " Jun 20 18:28:27.234643 kubelet[3415]: I0620 18:28:27.234603 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff637cd-0f12-4574-89da-90b39dbb286e-kube-api-access-6zfjh" (OuterVolumeSpecName: "kube-api-access-6zfjh") pod "8ff637cd-0f12-4574-89da-90b39dbb286e" (UID: "8ff637cd-0f12-4574-89da-90b39dbb286e"). InnerVolumeSpecName "kube-api-access-6zfjh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:28:27.236708 kubelet[3415]: I0620 18:28:27.236678 3415 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff637cd-0f12-4574-89da-90b39dbb286e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "8ff637cd-0f12-4574-89da-90b39dbb286e" (UID: "8ff637cd-0f12-4574-89da-90b39dbb286e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:28:27.236736 systemd[1]: var-lib-kubelet-pods-8ff637cd\x2d0f12\x2d4574\x2d89da\x2d90b39dbb286e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6zfjh.mount: Deactivated successfully. 
Jun 20 18:28:27.329375 kubelet[3415]: I0620 18:28:27.329353 3415 scope.go:117] "RemoveContainer" containerID="fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c" Jun 20 18:28:27.332480 containerd[1884]: time="2025-06-20T18:28:27.332444461Z" level=info msg="RemoveContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\"" Jun 20 18:28:27.333021 kubelet[3415]: I0620 18:28:27.333002 3415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zfjh\" (UniqueName: \"kubernetes.io/projected/8ff637cd-0f12-4574-89da-90b39dbb286e-kube-api-access-6zfjh\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:28:27.333131 kubelet[3415]: I0620 18:28:27.333120 3415 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ff637cd-0f12-4574-89da-90b39dbb286e-calico-apiserver-certs\") on node \"ci-4344.1.0-a-442b0d77ef\" DevicePath \"\"" Jun 20 18:28:27.336799 systemd[1]: Removed slice kubepods-besteffort-pod8ff637cd_0f12_4574_89da_90b39dbb286e.slice - libcontainer container kubepods-besteffort-pod8ff637cd_0f12_4574_89da_90b39dbb286e.slice. 
Jun 20 18:28:27.350008 containerd[1884]: time="2025-06-20T18:28:27.349889583Z" level=info msg="RemoveContainer for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" returns successfully" Jun 20 18:28:27.350420 kubelet[3415]: I0620 18:28:27.350397 3415 scope.go:117] "RemoveContainer" containerID="fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c" Jun 20 18:28:27.350833 containerd[1884]: time="2025-06-20T18:28:27.350674400Z" level=error msg="ContainerStatus for \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\": not found" Jun 20 18:28:27.351013 kubelet[3415]: E0620 18:28:27.350981 3415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\": not found" containerID="fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c" Jun 20 18:28:27.351167 kubelet[3415]: I0620 18:28:27.351016 3415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c"} err="failed to get container status \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe35eb4f855a206f73162d7018e278b1fb09268d85a5d34b48ce4d293c7e882c\": not found" Jun 20 18:28:27.875155 systemd[1]: var-lib-kubelet-pods-8ff637cd\x2d0f12\x2d4574\x2d89da\x2d90b39dbb286e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jun 20 18:28:28.046525 kubelet[3415]: I0620 18:28:28.046441 3415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff637cd-0f12-4574-89da-90b39dbb286e" path="/var/lib/kubelet/pods/8ff637cd-0f12-4574-89da-90b39dbb286e/volumes" Jun 20 18:28:33.324486 update_engine[1874]: I20250620 18:28:33.324405 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:28:33.324862 update_engine[1874]: I20250620 18:28:33.324642 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:28:33.326189 update_engine[1874]: I20250620 18:28:33.324896 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:28:33.429218 update_engine[1874]: E20250620 18:28:33.429131 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:28:33.429218 update_engine[1874]: I20250620 18:28:33.429235 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:28:34.175144 containerd[1884]: time="2025-06-20T18:28:34.175095805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"9497fcbc9e5a67277c613de6523e33425f85b6f9586f5a567d9b7ad0506f7f1c\" pid:6393 exited_at:{seconds:1750444114 nanos:174675168}" Jun 20 18:28:36.303124 containerd[1884]: time="2025-06-20T18:28:36.303079504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"ec2ec3b048ddbd3708cb55d66b86ddadc39b3068ecb5c4798f6bd5d680ab8a2a\" pid:6416 exited_at:{seconds:1750444116 nanos:302680524}" Jun 20 18:28:40.324088 containerd[1884]: time="2025-06-20T18:28:40.324040977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"9aa7fe5b4fa2ad6464801bb0fda7261ba6f48e1bb510ba28c85ba13bcb9da75e\" pid:6440 exited_at:{seconds:1750444120 nanos:323839115}" Jun 20 
18:28:40.376579 containerd[1884]: time="2025-06-20T18:28:40.376479108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"5ec909796c6d0d0c948e25de0a2e92bed6a694c97202a32f7ba85814043c6d45\" pid:6460 exited_at:{seconds:1750444120 nanos:375452075}" Jun 20 18:28:43.327308 update_engine[1874]: I20250620 18:28:43.327097 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:28:43.327658 update_engine[1874]: I20250620 18:28:43.327346 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:28:43.327658 update_engine[1874]: I20250620 18:28:43.327607 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:28:43.431671 update_engine[1874]: E20250620 18:28:43.431597 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:28:43.431819 update_engine[1874]: I20250620 18:28:43.431699 1874 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 18:28:51.886064 containerd[1884]: time="2025-06-20T18:28:51.885963554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"e76dafef4c628018853529870773611dde29e104a06fed43a9d3a3b7e9ab70c4\" pid:6485 exited_at:{seconds:1750444131 nanos:885498059}" Jun 20 18:28:53.325689 update_engine[1874]: I20250620 18:28:53.325610 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:28:53.326059 update_engine[1874]: I20250620 18:28:53.325845 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:28:53.326145 update_engine[1874]: I20250620 18:28:53.326105 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 18:28:53.338604 update_engine[1874]: E20250620 18:28:53.338567 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:28:53.338702 update_engine[1874]: I20250620 18:28:53.338618 1874 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:28:53.338702 update_engine[1874]: I20250620 18:28:53.338624 1874 omaha_request_action.cc:617] Omaha request response: Jun 20 18:28:53.338735 update_engine[1874]: E20250620 18:28:53.338709 1874 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 18:28:53.338735 update_engine[1874]: I20250620 18:28:53.338724 1874 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 18:28:53.338735 update_engine[1874]: I20250620 18:28:53.338728 1874 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:28:53.338735 update_engine[1874]: I20250620 18:28:53.338732 1874 update_attempter.cc:306] Processing Done. Jun 20 18:28:53.338791 update_engine[1874]: E20250620 18:28:53.338745 1874 update_attempter.cc:619] Update failed. Jun 20 18:28:53.338791 update_engine[1874]: I20250620 18:28:53.338748 1874 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 18:28:53.338791 update_engine[1874]: I20250620 18:28:53.338752 1874 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 18:28:53.338791 update_engine[1874]: I20250620 18:28:53.338755 1874 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jun 20 18:28:53.338849 update_engine[1874]: I20250620 18:28:53.338819 1874 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:28:53.338849 update_engine[1874]: I20250620 18:28:53.338835 1874 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:28:53.338849 update_engine[1874]: I20250620 18:28:53.338838 1874 omaha_request_action.cc:272] Request: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: Jun 20 18:28:53.338849 update_engine[1874]: I20250620 18:28:53.338842 1874 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:28:53.338971 update_engine[1874]: I20250620 18:28:53.338958 1874 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:28:53.339277 update_engine[1874]: I20250620 18:28:53.339136 1874 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 18:28:53.339350 locksmithd[1983]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 18:28:53.415638 update_engine[1874]: E20250620 18:28:53.415577 1874 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415659 1874 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415665 1874 omaha_request_action.cc:617] Omaha request response: Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415670 1874 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415674 1874 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415679 1874 update_attempter.cc:306] Processing Done. Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415684 1874 update_attempter.cc:310] Error event sent. 
Jun 20 18:28:53.415796 update_engine[1874]: I20250620 18:28:53.415694 1874 update_check_scheduler.cc:74] Next update check in 42m44s Jun 20 18:28:53.416104 locksmithd[1983]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 18:29:06.300970 containerd[1884]: time="2025-06-20T18:29:06.300923621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"f79620ea1b6bd8aeb6e38b6bd5e57b5d41fe104ac3c83e54c53e57b78d20687e\" pid:6511 exited_at:{seconds:1750444146 nanos:300484471}" Jun 20 18:29:10.320322 containerd[1884]: time="2025-06-20T18:29:10.320199486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"802de0df621064ec81fdd69414be15a1887e2dd19306d21f04223383ee741e74\" pid:6541 exited_at:{seconds:1750444150 nanos:320002168}" Jun 20 18:29:14.195039 containerd[1884]: time="2025-06-20T18:29:14.194995905Z" level=info msg="StopPodSandbox for \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\"" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.220 [WARNING][6560] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.221 [INFO][6560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.221 [INFO][6560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" iface="eth0" netns=""
Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.221 [INFO][6560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.221 [INFO][6560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.235 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.235 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.235 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.240 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0"
Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.240 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.241 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:29:14.244364 containerd[1884]: 2025-06-20 18:29:14.242 [INFO][6560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.244364 containerd[1884]: time="2025-06-20T18:29:14.244223322Z" level=info msg="TearDown network for sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" successfully" Jun 20 18:29:14.244364 containerd[1884]: time="2025-06-20T18:29:14.244248059Z" level=info msg="StopPodSandbox for \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" returns successfully" Jun 20 18:29:14.244992 containerd[1884]: time="2025-06-20T18:29:14.244837972Z" level=info msg="RemovePodSandbox for \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\"" Jun 20 18:29:14.244992 containerd[1884]: time="2025-06-20T18:29:14.244864013Z" level=info msg="Forcibly stopping sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\"" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.268 [WARNING][6581] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" WorkloadEndpoint="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0"
Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.268 [INFO][6581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.268 [INFO][6581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" iface="eth0" netns="" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.268 [INFO][6581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.268 [INFO][6581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.282 [INFO][6588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.282 [INFO][6588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.282 [INFO][6588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.287 [WARNING][6588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0"
Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.287 [INFO][6588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" HandleID="k8s-pod-network.a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Workload="ci--4344.1.0--a--442b0d77ef-k8s-calico--apiserver--68657b97d--rppnt-eth0" Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.288 [INFO][6588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 18:29:14.291247 containerd[1884]: 2025-06-20 18:29:14.289 [INFO][6581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff" Jun 20 18:29:14.291626 containerd[1884]: time="2025-06-20T18:29:14.291252898Z" level=info msg="TearDown network for sandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" successfully" Jun 20 18:29:14.292587 containerd[1884]: time="2025-06-20T18:29:14.292560905Z" level=info msg="Ensure that sandbox a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff in task-service has been cleanup successfully" Jun 20 18:29:14.304849 containerd[1884]: time="2025-06-20T18:29:14.304814965Z" level=info msg="RemovePodSandbox \"a92af5dc57483acb99163a609f84a9d3c6100dfa8adabe878bd165bf0a2a69ff\" returns successfully" Jun 20 18:29:21.889889 containerd[1884]: time="2025-06-20T18:29:21.889845764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"16ec1b62aa35ee4f743702edcbedfbb7b1f3246c444569d103b8f9674d8b9a38\" pid:6610 exited_at:{seconds:1750444161 nanos:889255786}"
Jun 20 18:29:31.440858 systemd[1]: Started sshd@7-10.200.20.17:22-10.200.16.10:34360.service - OpenSSH per-connection server daemon (10.200.16.10:34360). Jun 20 18:29:31.896997 sshd[6648]: Accepted publickey for core from 10.200.16.10 port 34360 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:31.898458 sshd-session[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:31.902451 systemd-logind[1867]: New session 10 of user core. Jun 20 18:29:31.909443 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:29:32.295619 sshd[6650]: Connection closed by 10.200.16.10 port 34360 Jun 20 18:29:32.296132 sshd-session[6648]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:32.300023 systemd[1]: sshd@7-10.200.20.17:22-10.200.16.10:34360.service: Deactivated successfully. Jun 20 18:29:32.301911 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:29:32.302687 systemd-logind[1867]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:29:32.304383 systemd-logind[1867]: Removed session 10. Jun 20 18:29:34.168262 containerd[1884]: time="2025-06-20T18:29:34.168145950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"8a76dd186ec95c45330ee540ba99581edfd06f919cb26b4b480b411b7ea9ca2d\" pid:6675 exited_at:{seconds:1750444174 nanos:167297524}" Jun 20 18:29:36.328526 containerd[1884]: time="2025-06-20T18:29:36.327892168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"f0c3046434a60453d73da86a0cdff94b4524a3a584806dd32409cea51c0fd075\" pid:6697 exited_at:{seconds:1750444176 nanos:327553849}" Jun 20 18:29:37.386107 systemd[1]: Started sshd@8-10.200.20.17:22-10.200.16.10:34374.service - OpenSSH per-connection server daemon (10.200.16.10:34374).
Jun 20 18:29:37.877391 sshd[6708]: Accepted publickey for core from 10.200.16.10 port 34374 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:37.878741 sshd-session[6708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:37.884610 systemd-logind[1867]: New session 11 of user core. Jun 20 18:29:37.889402 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 18:29:38.282403 sshd[6710]: Connection closed by 10.200.16.10 port 34374 Jun 20 18:29:38.283161 sshd-session[6708]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:38.286423 systemd[1]: sshd@8-10.200.20.17:22-10.200.16.10:34374.service: Deactivated successfully. Jun 20 18:29:38.288199 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:29:38.288872 systemd-logind[1867]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:29:38.290039 systemd-logind[1867]: Removed session 11. Jun 20 18:29:40.318772 containerd[1884]: time="2025-06-20T18:29:40.318699349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"1b74b4a5810ccaa1e03c5c42c0b88ae5e9350ee238ed9fbec8451951faefdd38\" pid:6735 exited_at:{seconds:1750444180 nanos:318539800}" Jun 20 18:29:40.363762 containerd[1884]: time="2025-06-20T18:29:40.363726749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"8c1f386480ea4c94cf06d6998f0b314b71e22a4bae9e53f679002b613cf9fc29\" pid:6756 exited_at:{seconds:1750444180 nanos:363262862}" Jun 20 18:29:43.365757 systemd[1]: Started sshd@9-10.200.20.17:22-10.200.16.10:41782.service - OpenSSH per-connection server daemon (10.200.16.10:41782). 
Jun 20 18:29:43.820919 sshd[6766]: Accepted publickey for core from 10.200.16.10 port 41782 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:43.822253 sshd-session[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:43.826245 systemd-logind[1867]: New session 12 of user core. Jun 20 18:29:43.834407 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:29:44.208510 sshd[6768]: Connection closed by 10.200.16.10 port 41782 Jun 20 18:29:44.209104 sshd-session[6766]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:44.212816 systemd[1]: sshd@9-10.200.20.17:22-10.200.16.10:41782.service: Deactivated successfully. Jun 20 18:29:44.215122 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:29:44.216867 systemd-logind[1867]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:29:44.218180 systemd-logind[1867]: Removed session 12. Jun 20 18:29:44.290814 systemd[1]: Started sshd@10-10.200.20.17:22-10.200.16.10:41796.service - OpenSSH per-connection server daemon (10.200.16.10:41796). Jun 20 18:29:44.743525 sshd[6780]: Accepted publickey for core from 10.200.16.10 port 41796 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:44.744830 sshd-session[6780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:44.748628 systemd-logind[1867]: New session 13 of user core. Jun 20 18:29:44.755425 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:29:45.161236 sshd[6782]: Connection closed by 10.200.16.10 port 41796 Jun 20 18:29:45.161791 sshd-session[6780]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:45.165197 systemd[1]: sshd@10-10.200.20.17:22-10.200.16.10:41796.service: Deactivated successfully. Jun 20 18:29:45.167412 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:29:45.168243 systemd-logind[1867]: Session 13 logged out. Waiting for processes to exit.
Jun 20 18:29:45.169692 systemd-logind[1867]: Removed session 13. Jun 20 18:29:45.255392 systemd[1]: Started sshd@11-10.200.20.17:22-10.200.16.10:41800.service - OpenSSH per-connection server daemon (10.200.16.10:41800). Jun 20 18:29:45.713727 sshd[6792]: Accepted publickey for core from 10.200.16.10 port 41800 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:45.714948 sshd-session[6792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:45.719507 systemd-logind[1867]: New session 14 of user core. Jun 20 18:29:45.723444 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 18:29:46.093742 sshd[6794]: Connection closed by 10.200.16.10 port 41800 Jun 20 18:29:46.093234 sshd-session[6792]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:46.096226 systemd[1]: sshd@11-10.200.20.17:22-10.200.16.10:41800.service: Deactivated successfully. Jun 20 18:29:46.097874 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:29:46.098569 systemd-logind[1867]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:29:46.100044 systemd-logind[1867]: Removed session 14. Jun 20 18:29:51.175350 systemd[1]: Started sshd@12-10.200.20.17:22-10.200.16.10:51104.service - OpenSSH per-connection server daemon (10.200.16.10:51104). Jun 20 18:29:51.633928 sshd[6814]: Accepted publickey for core from 10.200.16.10 port 51104 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:51.635224 sshd-session[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:51.639000 systemd-logind[1867]: New session 15 of user core. Jun 20 18:29:51.645425 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 18:29:51.896005 containerd[1884]: time="2025-06-20T18:29:51.895887554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"680c0f1390228d1e492854ddee7f08656ef133071d3a0d1c43ed413ec4a88b6f\" pid:6830 exited_at:{seconds:1750444191 nanos:895593370}" Jun 20 18:29:52.027351 sshd[6816]: Connection closed by 10.200.16.10 port 51104 Jun 20 18:29:52.027883 sshd-session[6814]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:52.031388 systemd[1]: sshd@12-10.200.20.17:22-10.200.16.10:51104.service: Deactivated successfully. Jun 20 18:29:52.031574 systemd-logind[1867]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:29:52.034569 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:29:52.037825 systemd-logind[1867]: Removed session 15. Jun 20 18:29:57.121157 systemd[1]: Started sshd@13-10.200.20.17:22-10.200.16.10:51118.service - OpenSSH per-connection server daemon (10.200.16.10:51118). Jun 20 18:29:57.576698 sshd[6855]: Accepted publickey for core from 10.200.16.10 port 51118 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:29:57.577970 sshd-session[6855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:29:57.582387 systemd-logind[1867]: New session 16 of user core. Jun 20 18:29:57.587428 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:29:57.953470 sshd[6857]: Connection closed by 10.200.16.10 port 51118 Jun 20 18:29:57.953650 sshd-session[6855]: pam_unix(sshd:session): session closed for user core Jun 20 18:29:57.956258 systemd[1]: sshd@13-10.200.20.17:22-10.200.16.10:51118.service: Deactivated successfully. Jun 20 18:29:57.958925 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:29:57.961168 systemd-logind[1867]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:29:57.962193 systemd-logind[1867]: Removed session 16. 
Jun 20 18:30:03.037888 systemd[1]: Started sshd@14-10.200.20.17:22-10.200.16.10:52494.service - OpenSSH per-connection server daemon (10.200.16.10:52494). Jun 20 18:30:03.491422 sshd[6870]: Accepted publickey for core from 10.200.16.10 port 52494 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:03.492778 sshd-session[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:03.496752 systemd-logind[1867]: New session 17 of user core. Jun 20 18:30:03.503451 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 18:30:03.881248 sshd[6872]: Connection closed by 10.200.16.10 port 52494 Jun 20 18:30:03.881737 sshd-session[6870]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:03.886819 systemd[1]: sshd@14-10.200.20.17:22-10.200.16.10:52494.service: Deactivated successfully. Jun 20 18:30:03.888826 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:30:03.889925 systemd-logind[1867]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:30:03.892010 systemd-logind[1867]: Removed session 17. Jun 20 18:30:06.307266 containerd[1884]: time="2025-06-20T18:30:06.307217542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"ebe17e1ca158f6026bea3bb0f7d176944dd07ffd8cfa3fc0a057ee1fbb535f62\" pid:6895 exited_at:{seconds:1750444206 nanos:306889108}" Jun 20 18:30:08.964131 systemd[1]: Started sshd@15-10.200.20.17:22-10.200.16.10:39152.service - OpenSSH per-connection server daemon (10.200.16.10:39152). Jun 20 18:30:09.419194 sshd[6905]: Accepted publickey for core from 10.200.16.10 port 39152 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:09.420801 sshd-session[6905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:09.424787 systemd-logind[1867]: New session 18 of user core. 
Jun 20 18:30:09.431406 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:30:09.808879 sshd[6907]: Connection closed by 10.200.16.10 port 39152 Jun 20 18:30:09.807914 sshd-session[6905]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:09.811632 systemd[1]: sshd@15-10.200.20.17:22-10.200.16.10:39152.service: Deactivated successfully. Jun 20 18:30:09.813183 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:30:09.813835 systemd-logind[1867]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:30:09.816163 systemd-logind[1867]: Removed session 18. Jun 20 18:30:09.895497 systemd[1]: Started sshd@16-10.200.20.17:22-10.200.16.10:39160.service - OpenSSH per-connection server daemon (10.200.16.10:39160). Jun 20 18:30:10.327997 containerd[1884]: time="2025-06-20T18:30:10.327956762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"807c3591ba2fe8706969c687198892d1859c2f59577a9c76412c33c621d4b7d9\" pid:6934 exited_at:{seconds:1750444210 nanos:327441523}" Jun 20 18:30:10.352650 sshd[6919]: Accepted publickey for core from 10.200.16.10 port 39160 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:10.355099 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:10.360876 systemd-logind[1867]: New session 19 of user core. Jun 20 18:30:10.364425 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:30:10.822022 sshd[6943]: Connection closed by 10.200.16.10 port 39160 Jun 20 18:30:10.822891 sshd-session[6919]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:10.826484 systemd-logind[1867]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:30:10.827013 systemd[1]: sshd@16-10.200.20.17:22-10.200.16.10:39160.service: Deactivated successfully. 
Jun 20 18:30:10.828972 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:30:10.830757 systemd-logind[1867]: Removed session 19. Jun 20 18:30:10.911062 systemd[1]: Started sshd@17-10.200.20.17:22-10.200.16.10:39176.service - OpenSSH per-connection server daemon (10.200.16.10:39176). Jun 20 18:30:11.406633 sshd[6953]: Accepted publickey for core from 10.200.16.10 port 39176 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:11.408030 sshd-session[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:11.411964 systemd-logind[1867]: New session 20 of user core. Jun 20 18:30:11.421575 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:30:12.554913 sshd[6955]: Connection closed by 10.200.16.10 port 39176 Jun 20 18:30:12.555274 sshd-session[6953]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:12.559027 systemd[1]: sshd@17-10.200.20.17:22-10.200.16.10:39176.service: Deactivated successfully. Jun 20 18:30:12.561486 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:30:12.562701 systemd-logind[1867]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:30:12.564315 systemd-logind[1867]: Removed session 20. Jun 20 18:30:12.646715 systemd[1]: Started sshd@18-10.200.20.17:22-10.200.16.10:39182.service - OpenSSH per-connection server daemon (10.200.16.10:39182). Jun 20 18:30:13.125000 sshd[6973]: Accepted publickey for core from 10.200.16.10 port 39182 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:13.126351 sshd-session[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:13.130272 systemd-logind[1867]: New session 21 of user core. Jun 20 18:30:13.137425 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 18:30:13.604885 sshd[6975]: Connection closed by 10.200.16.10 port 39182 Jun 20 18:30:13.611999 sshd-session[6973]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:13.615245 systemd[1]: sshd@18-10.200.20.17:22-10.200.16.10:39182.service: Deactivated successfully. Jun 20 18:30:13.616949 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:30:13.617649 systemd-logind[1867]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:30:13.619171 systemd-logind[1867]: Removed session 21. Jun 20 18:30:13.687925 systemd[1]: Started sshd@19-10.200.20.17:22-10.200.16.10:39196.service - OpenSSH per-connection server daemon (10.200.16.10:39196). Jun 20 18:30:14.145926 sshd[6986]: Accepted publickey for core from 10.200.16.10 port 39196 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:14.147227 sshd-session[6986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:14.151535 systemd-logind[1867]: New session 22 of user core. Jun 20 18:30:14.155430 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 18:30:14.520271 sshd[6990]: Connection closed by 10.200.16.10 port 39196 Jun 20 18:30:14.521084 sshd-session[6986]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:14.524676 systemd-logind[1867]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:30:14.524843 systemd[1]: sshd@19-10.200.20.17:22-10.200.16.10:39196.service: Deactivated successfully. Jun 20 18:30:14.528112 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:30:14.529621 systemd-logind[1867]: Removed session 22. Jun 20 18:30:19.606349 systemd[1]: Started sshd@20-10.200.20.17:22-10.200.16.10:48776.service - OpenSSH per-connection server daemon (10.200.16.10:48776). 
Jun 20 18:30:20.084666 sshd[7003]: Accepted publickey for core from 10.200.16.10 port 48776 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:20.086382 sshd-session[7003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:20.090340 systemd-logind[1867]: New session 23 of user core. Jun 20 18:30:20.094428 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:30:20.474581 sshd[7007]: Connection closed by 10.200.16.10 port 48776 Jun 20 18:30:20.475062 sshd-session[7003]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:20.478200 systemd[1]: sshd@20-10.200.20.17:22-10.200.16.10:48776.service: Deactivated successfully. Jun 20 18:30:20.479958 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:30:20.480579 systemd-logind[1867]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:30:20.481796 systemd-logind[1867]: Removed session 23. Jun 20 18:30:21.935612 containerd[1884]: time="2025-06-20T18:30:21.935570466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"11a63e072313eecbcfaa12302713c377532685d3a1d3a1883019ff3c3f601c96\" id:\"04226df5a4cb5877a55bd5d2eb452960c07b9c23fc2faea8f8b48a9f42467a80\" pid:7029 exited_at:{seconds:1750444221 nanos:935101308}" Jun 20 18:30:25.560526 systemd[1]: Started sshd@21-10.200.20.17:22-10.200.16.10:48790.service - OpenSSH per-connection server daemon (10.200.16.10:48790). Jun 20 18:30:26.011636 sshd[7042]: Accepted publickey for core from 10.200.16.10 port 48790 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:26.013226 sshd-session[7042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:26.018464 systemd-logind[1867]: New session 24 of user core. Jun 20 18:30:26.025421 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 20 18:30:26.395010 sshd[7044]: Connection closed by 10.200.16.10 port 48790 Jun 20 18:30:26.395596 sshd-session[7042]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:26.398765 systemd[1]: sshd@21-10.200.20.17:22-10.200.16.10:48790.service: Deactivated successfully. Jun 20 18:30:26.400448 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:30:26.401192 systemd-logind[1867]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:30:26.402574 systemd-logind[1867]: Removed session 24. Jun 20 18:30:31.483360 systemd[1]: Started sshd@22-10.200.20.17:22-10.200.16.10:52240.service - OpenSSH per-connection server daemon (10.200.16.10:52240). Jun 20 18:30:31.975326 sshd[7062]: Accepted publickey for core from 10.200.16.10 port 52240 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:31.976547 sshd-session[7062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:31.980560 systemd-logind[1867]: New session 25 of user core. Jun 20 18:30:31.987431 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:30:32.372337 sshd[7064]: Connection closed by 10.200.16.10 port 52240 Jun 20 18:30:32.372922 sshd-session[7062]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:32.376510 systemd-logind[1867]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:30:32.377113 systemd[1]: sshd@22-10.200.20.17:22-10.200.16.10:52240.service: Deactivated successfully. Jun 20 18:30:32.378892 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:30:32.380681 systemd-logind[1867]: Removed session 25. 
Jun 20 18:30:34.169602 containerd[1884]: time="2025-06-20T18:30:34.169559356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"eea6f0d8a25e36cdf754b8f0adb7732d6d3b7ecd3db672e7507c7ab354fe414a\" pid:7088 exited_at:{seconds:1750444234 nanos:169275579}" Jun 20 18:30:36.304088 containerd[1884]: time="2025-06-20T18:30:36.304033167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a23c8008d19994e40f3942524bf34bc8e1966e9c5940e7fd6e6226144bd07f04\" id:\"79913b97694eb53ac2c86d150b562527bf4170edbea34377db635a6f2e1311fa\" pid:7110 exited_at:{seconds:1750444236 nanos:303602690}" Jun 20 18:30:37.459462 systemd[1]: Started sshd@23-10.200.20.17:22-10.200.16.10:52244.service - OpenSSH per-connection server daemon (10.200.16.10:52244). Jun 20 18:30:37.931148 sshd[7121]: Accepted publickey for core from 10.200.16.10 port 52244 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:37.933126 sshd-session[7121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:37.939487 systemd-logind[1867]: New session 26 of user core. Jun 20 18:30:37.943422 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:30:38.330489 sshd[7123]: Connection closed by 10.200.16.10 port 52244 Jun 20 18:30:38.331485 sshd-session[7121]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:38.335392 systemd[1]: sshd@23-10.200.20.17:22-10.200.16.10:52244.service: Deactivated successfully. Jun 20 18:30:38.337201 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:30:38.338037 systemd-logind[1867]: Session 26 logged out. Waiting for processes to exit. Jun 20 18:30:38.340087 systemd-logind[1867]: Removed session 26. 
Jun 20 18:30:40.321129 containerd[1884]: time="2025-06-20T18:30:40.321090604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"ce1bbb24782c3e19a1dcd162ed7ff4716edd58d7710bb6766c3115ba944697d0\" pid:7147 exited_at:{seconds:1750444240 nanos:320644014}" Jun 20 18:30:40.363484 containerd[1884]: time="2025-06-20T18:30:40.363440723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1bf0275bf65f547dc1d8cbca0c48ba0f85eb1af90f520cd898f55888085fd79\" id:\"76bd82986e6ed6095102e0a5c39792b8b92cd3d3e23bc064aa2e1a569b461590\" pid:7168 exited_at:{seconds:1750444240 nanos:363236397}" Jun 20 18:30:43.423717 systemd[1]: Started sshd@24-10.200.20.17:22-10.200.16.10:51722.service - OpenSSH per-connection server daemon (10.200.16.10:51722). Jun 20 18:30:43.882219 sshd[7179]: Accepted publickey for core from 10.200.16.10 port 51722 ssh2: RSA SHA256:MZTQF29f0y7iei/0IQ9xOnEIftBAF0HraXVnrH3QUGc Jun 20 18:30:43.884726 sshd-session[7179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:30:43.889065 systemd-logind[1867]: New session 27 of user core. Jun 20 18:30:43.898432 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 18:30:44.272414 sshd[7182]: Connection closed by 10.200.16.10 port 51722 Jun 20 18:30:44.273064 sshd-session[7179]: pam_unix(sshd:session): session closed for user core Jun 20 18:30:44.276664 systemd[1]: sshd@24-10.200.20.17:22-10.200.16.10:51722.service: Deactivated successfully. Jun 20 18:30:44.278334 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:30:44.278976 systemd-logind[1867]: Session 27 logged out. Waiting for processes to exit. Jun 20 18:30:44.280344 systemd-logind[1867]: Removed session 27.