Aug 13 00:20:44.317312 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:20:44.317334 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:20:44.317342 kernel: KASLR enabled
Aug 13 00:20:44.317348 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Aug 13 00:20:44.317356 kernel: printk: bootconsole [pl11] enabled
Aug 13 00:20:44.317361 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:20:44.317369 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Aug 13 00:20:44.317375 kernel: random: crng init done
Aug 13 00:20:44.317381 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:20:44.317387 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Aug 13 00:20:44.317393 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317399 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317407 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 13 00:20:44.317413 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317420 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317427 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317433 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317441 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317447 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317454 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Aug 13 00:20:44.317460 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:20:44.317467 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Aug 13 00:20:44.317473 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Aug 13 00:20:44.317479 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Aug 13 00:20:44.317486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Aug 13 00:20:44.317492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Aug 13 00:20:44.317498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Aug 13 00:20:44.317505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Aug 13 00:20:44.317513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Aug 13 00:20:44.317519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Aug 13 00:20:44.317526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Aug 13 00:20:44.317532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Aug 13 00:20:44.317538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Aug 13 00:20:44.317545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Aug 13 00:20:44.317581 kernel: NUMA: NODE_DATA [mem 0x1bf7ed800-0x1bf7f2fff]
Aug 13 00:20:44.317588 kernel: Zone ranges:
Aug 13 00:20:44.317594 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Aug 13 00:20:44.317600 kernel: DMA32 empty
Aug 13 00:20:44.317607 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:20:44.317613 kernel: Movable zone start for each node
Aug 13 00:20:44.317624 kernel: Early memory node ranges
Aug 13 00:20:44.317631 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Aug 13 00:20:44.317638 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Aug 13 00:20:44.317645 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Aug 13 00:20:44.317652 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Aug 13 00:20:44.317660 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Aug 13 00:20:44.317667 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Aug 13 00:20:44.317673 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:20:44.317680 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:20:44.317687 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Aug 13 00:20:44.317693 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:20:44.317700 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:20:44.317707 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:20:44.317713 kernel: psci: MIGRATE_INFO_TYPE not supported.
Aug 13 00:20:44.317720 kernel: psci: SMC Calling Convention v1.4
Aug 13 00:20:44.317727 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Aug 13 00:20:44.317734 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Aug 13 00:20:44.317742 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:20:44.317749 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:20:44.317756 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:20:44.317762 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:20:44.317769 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:20:44.317776 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:20:44.317783 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:20:44.317789 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:20:44.317796 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:20:44.317803 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:20:44.317810 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Aug 13 00:20:44.317818 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:20:44.317825 kernel: alternatives: applying boot alternatives
Aug 13 00:20:44.317833 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:20:44.317840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:20:44.317847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:20:44.317854 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:20:44.317861 kernel: Fallback order for Node 0: 0
Aug 13 00:20:44.317867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Aug 13 00:20:44.317874 kernel: Policy zone: Normal
Aug 13 00:20:44.317881 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:20:44.317888 kernel: software IO TLB: area num 2.
Aug 13 00:20:44.317896 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Aug 13 00:20:44.317903 kernel: Memory: 3982620K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211540K reserved, 0K cma-reserved)
Aug 13 00:20:44.317910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:20:44.317916 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:20:44.317924 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:20:44.317931 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:20:44.317938 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:20:44.317945 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:20:44.317952 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:20:44.317958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:20:44.317965 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:20:44.317973 kernel: GICv3: 960 SPIs implemented
Aug 13 00:20:44.317980 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:20:44.317986 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:20:44.317993 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:20:44.318000 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Aug 13 00:20:44.318007 kernel: ITS: No ITS available, not enabling LPIs
Aug 13 00:20:44.318013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:20:44.318021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:20:44.318027 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:20:44.318034 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:20:44.318041 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:20:44.318050 kernel: Console: colour dummy device 80x25
Aug 13 00:20:44.318057 kernel: printk: console [tty1] enabled
Aug 13 00:20:44.318064 kernel: ACPI: Core revision 20230628
Aug 13 00:20:44.318071 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:20:44.318078 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:20:44.318085 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:20:44.318092 kernel: landlock: Up and running.
Aug 13 00:20:44.318099 kernel: SELinux: Initializing.
Aug 13 00:20:44.318106 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.318113 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.318121 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:20:44.318128 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:20:44.318136 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Aug 13 00:20:44.318142 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Aug 13 00:20:44.318149 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 00:20:44.318156 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:20:44.318164 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:20:44.318177 kernel: Remapping and enabling EFI services.
Aug 13 00:20:44.318184 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:20:44.318191 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:20:44.318199 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Aug 13 00:20:44.318207 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:20:44.318215 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:20:44.318222 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:20:44.318229 kernel: SMP: Total of 2 processors activated.
Aug 13 00:20:44.318237 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:20:44.318246 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Aug 13 00:20:44.318253 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:20:44.318261 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:20:44.318268 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:20:44.318275 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:20:44.318282 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:20:44.318290 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:20:44.318297 kernel: alternatives: applying system-wide alternatives
Aug 13 00:20:44.318304 kernel: devtmpfs: initialized
Aug 13 00:20:44.318313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:20:44.318320 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:20:44.318328 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:20:44.318335 kernel: SMBIOS 3.1.0 present.
Aug 13 00:20:44.318342 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Aug 13 00:20:44.318350 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:20:44.318357 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:20:44.318364 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:20:44.318372 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:20:44.318380 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:20:44.318388 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Aug 13 00:20:44.318395 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:20:44.318402 kernel: cpuidle: using governor menu
Aug 13 00:20:44.318409 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:20:44.318417 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:20:44.318424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:20:44.318431 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:20:44.318439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:20:44.318447 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:20:44.318455 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:20:44.318462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:20:44.318470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:20:44.318477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:20:44.318484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:20:44.318492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:20:44.318499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:20:44.318506 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:20:44.318515 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:20:44.318522 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:20:44.318530 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:20:44.318537 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:20:44.318544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:20:44.323123 kernel: ACPI: Interpreter enabled
Aug 13 00:20:44.323133 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:20:44.323141 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:20:44.323148 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:20:44.323163 kernel: printk: bootconsole [pl11] disabled
Aug 13 00:20:44.323171 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Aug 13 00:20:44.323178 kernel: iommu: Default domain type: Translated
Aug 13 00:20:44.323186 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:20:44.323193 kernel: efivars: Registered efivars operations
Aug 13 00:20:44.323200 kernel: vgaarb: loaded
Aug 13 00:20:44.323207 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:20:44.323215 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:20:44.323222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:20:44.323231 kernel: pnp: PnP ACPI init
Aug 13 00:20:44.323239 kernel: pnp: PnP ACPI: found 0 devices
Aug 13 00:20:44.323246 kernel: NET: Registered PF_INET protocol family
Aug 13 00:20:44.323254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:20:44.323262 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:20:44.323269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:20:44.323276 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:20:44.323284 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:20:44.323291 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:20:44.323300 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.323308 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:20:44.323315 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:20:44.323323 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:20:44.323330 kernel: kvm [1]: HYP mode not available
Aug 13 00:20:44.323337 kernel: Initialise system trusted keyrings
Aug 13 00:20:44.323345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:20:44.323352 kernel: Key type asymmetric registered
Aug 13 00:20:44.323359 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:20:44.323368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:20:44.323375 kernel: io scheduler mq-deadline registered
Aug 13 00:20:44.323383 kernel: io scheduler kyber registered
Aug 13 00:20:44.323390 kernel: io scheduler bfq registered
Aug 13 00:20:44.323397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:20:44.323405 kernel: thunder_xcv, ver 1.0
Aug 13 00:20:44.323412 kernel: thunder_bgx, ver 1.0
Aug 13 00:20:44.323419 kernel: nicpf, ver 1.0
Aug 13 00:20:44.323426 kernel: nicvf, ver 1.0
Aug 13 00:20:44.323627 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:20:44.323708 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:20:43 UTC (1755044443)
Aug 13 00:20:44.323719 kernel: efifb: probing for efifb
Aug 13 00:20:44.323726 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:20:44.323734 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:20:44.323741 kernel: efifb: scrolling: redraw
Aug 13 00:20:44.323749 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:20:44.323756 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:20:44.323766 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:20:44.323774 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Aug 13 00:20:44.323781 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:20:44.323789 kernel: No ACPI PMU IRQ for CPU0
Aug 13 00:20:44.323796 kernel: No ACPI PMU IRQ for CPU1
Aug 13 00:20:44.323803 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Aug 13 00:20:44.323811 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:20:44.323818 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:20:44.323825 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:20:44.323835 kernel: Segment Routing with IPv6
Aug 13 00:20:44.323842 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:20:44.323849 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:20:44.323857 kernel: Key type dns_resolver registered
Aug 13 00:20:44.323864 kernel: registered taskstats version 1
Aug 13 00:20:44.323871 kernel: Loading compiled-in X.509 certificates
Aug 13 00:20:44.323879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:20:44.323886 kernel: Key type .fscrypt registered
Aug 13 00:20:44.323893 kernel: Key type fscrypt-provisioning registered
Aug 13 00:20:44.323902 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:20:44.323909 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:20:44.323917 kernel: ima: No architecture policies found
Aug 13 00:20:44.323924 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:20:44.323932 kernel: clk: Disabling unused clocks
Aug 13 00:20:44.323939 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:20:44.323946 kernel: Run /init as init process
Aug 13 00:20:44.323958 kernel: with arguments:
Aug 13 00:20:44.323966 kernel: /init
Aug 13 00:20:44.323974 kernel: with environment:
Aug 13 00:20:44.323982 kernel: HOME=/
Aug 13 00:20:44.323989 kernel: TERM=linux
Aug 13 00:20:44.323997 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:20:44.324006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:20:44.324016 systemd[1]: Detected virtualization microsoft.
Aug 13 00:20:44.324024 systemd[1]: Detected architecture arm64.
Aug 13 00:20:44.324031 systemd[1]: Running in initrd.
Aug 13 00:20:44.324041 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:20:44.324049 systemd[1]: Hostname set to .
Aug 13 00:20:44.324057 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:20:44.324065 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:20:44.324073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:20:44.324081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:20:44.324090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:20:44.324099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:20:44.324108 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:20:44.324117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:20:44.324126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:20:44.324134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:20:44.324142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:20:44.324150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:20:44.324159 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:20:44.324167 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:20:44.324175 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:20:44.324183 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:20:44.324191 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:20:44.324199 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:20:44.324207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:20:44.324215 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:20:44.324223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:20:44.324232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:20:44.324240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:20:44.324248 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:20:44.324256 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:20:44.324265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:20:44.324273 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:20:44.324281 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:20:44.324289 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:20:44.324297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:20:44.324324 systemd-journald[217]: Collecting audit messages is disabled.
Aug 13 00:20:44.324344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:44.324353 systemd-journald[217]: Journal started
Aug 13 00:20:44.324373 systemd-journald[217]: Runtime Journal (/run/log/journal/866183bbb6a14c2ab62a9eb9c46031da) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:20:44.325102 systemd-modules-load[218]: Inserted module 'overlay'
Aug 13 00:20:44.348402 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:20:44.351985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:20:44.373867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:20:44.373906 kernel: Bridge firewalling registered
Aug 13 00:20:44.377316 systemd-modules-load[218]: Inserted module 'br_netfilter'
Aug 13 00:20:44.378829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:20:44.389831 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:20:44.401717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:20:44.412451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:44.432831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:44.454111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:20:44.466759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:20:44.495726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:20:44.506935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:44.519428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:20:44.526265 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:20:44.543507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:20:44.575956 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:20:44.584730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:20:44.609125 dracut-cmdline[248]: dracut-dracut-053
Aug 13 00:20:44.612809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:20:44.630591 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:20:44.662441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:20:44.667537 systemd-resolved[250]: Positive Trust Anchors:
Aug 13 00:20:44.667562 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:20:44.667595 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:20:44.669807 systemd-resolved[250]: Defaulting to hostname 'linux'.
Aug 13 00:20:44.678218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:20:44.684990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:44.783569 kernel: SCSI subsystem initialized
Aug 13 00:20:44.790580 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:20:44.801584 kernel: iscsi: registered transport (tcp)
Aug 13 00:20:44.821273 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:20:44.821340 kernel: QLogic iSCSI HBA Driver
Aug 13 00:20:44.861428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:20:44.883710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:20:44.915817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:20:44.915862 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:20:44.922929 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:20:44.972575 kernel: raid6: neonx8 gen() 15780 MB/s
Aug 13 00:20:44.992571 kernel: raid6: neonx4 gen() 15629 MB/s
Aug 13 00:20:45.012566 kernel: raid6: neonx2 gen() 13236 MB/s
Aug 13 00:20:45.033568 kernel: raid6: neonx1 gen() 10478 MB/s
Aug 13 00:20:45.053561 kernel: raid6: int64x8 gen() 6960 MB/s
Aug 13 00:20:45.073565 kernel: raid6: int64x4 gen() 7350 MB/s
Aug 13 00:20:45.094566 kernel: raid6: int64x2 gen() 6133 MB/s
Aug 13 00:20:45.118516 kernel: raid6: int64x1 gen() 5058 MB/s
Aug 13 00:20:45.118577 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Aug 13 00:20:45.143275 kernel: raid6: .... xor() 11903 MB/s, rmw enabled
Aug 13 00:20:45.143324 kernel: raid6: using neon recovery algorithm
Aug 13 00:20:45.152563 kernel: xor: measuring software checksum speed
Aug 13 00:20:45.159525 kernel: 8regs : 18696 MB/sec
Aug 13 00:20:45.159559 kernel: 32regs : 19603 MB/sec
Aug 13 00:20:45.162935 kernel: arm64_neon : 27132 MB/sec
Aug 13 00:20:45.167169 kernel: xor: using function: arm64_neon (27132 MB/sec)
Aug 13 00:20:45.217573 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:20:45.228425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:20:45.243683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:20:45.266114 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Aug 13 00:20:45.271835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:20:45.288764 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:20:45.308095 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Aug 13 00:20:45.334941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:45.349795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:20:45.386378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:45.408774 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:20:45.435587 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:45.443426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:45.463784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:45.480424 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:20:45.503788 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:20:45.519235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:20:45.537567 kernel: hv_vmbus: Vmbus version:5.3
Aug 13 00:20:45.519393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:45.530898 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:45.593744 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:20:45.593773 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:20:45.593784 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:20:45.593793 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:20:45.549618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:45.611446 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:20:45.549840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.636743 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Aug 13 00:20:45.636776 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Aug 13 00:20:45.578654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:45.659890 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:20:45.660040 kernel: PTP clock support registered Aug 13 00:20:45.645731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:45.705636 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 00:20:45.705662 kernel: hv_vmbus: registering driver hv_utils Aug 13 00:20:45.705672 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 00:20:45.705681 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 00:20:45.705690 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 00:20:45.670396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:20:45.686000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:20:45.360084 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:20:45.360101 kernel: scsi host0: storvsc_host_t Aug 13 00:20:45.360230 systemd-journald[217]: Time jumped backwards, rotating. Aug 13 00:20:45.360279 kernel: scsi host1: storvsc_host_t Aug 13 00:20:45.686107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:45.404356 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:20:45.404398 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: VF slot 1 added Aug 13 00:20:45.404565 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:20:45.334289 systemd-resolved[250]: Clock change detected. Flushing caches. 
Aug 13 00:20:45.358649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:45.443395 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:20:45.443419 kernel: hv_pci 7b6b2293-5bb3-49e9-9276-be969652f9bc: PCI VMBus probing: Using version 0x10004 Aug 13 00:20:45.443604 kernel: hv_pci 7b6b2293-5bb3-49e9-9276-be969652f9bc: PCI host bridge to bus 5bb3:00 Aug 13 00:20:45.396283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:45.478236 kernel: pci_bus 5bb3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 13 00:20:45.478422 kernel: pci_bus 5bb3:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:20:45.478511 kernel: pci 5bb3:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 13 00:20:45.429823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:20:45.505353 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:20:45.505553 kernel: pci 5bb3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:20:45.505580 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:20:45.505589 kernel: pci 5bb3:00:02.0: enabling Extended Tags Aug 13 00:20:45.509795 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:20:45.514657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 00:20:45.548356 kernel: pci 5bb3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5bb3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 13 00:20:45.548605 kernel: pci_bus 5bb3:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:20:45.549225 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:20:45.559824 kernel: pci 5bb3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:20:45.559978 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:20:45.568947 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:20:45.576176 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:20:45.576340 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:20:45.585652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:20:45.590658 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:20:45.631096 kernel: mlx5_core 5bb3:00:02.0: enabling device (0000 -> 0002) Aug 13 00:20:45.644676 kernel: mlx5_core 5bb3:00:02.0: firmware version: 16.30.1284 Aug 13 00:20:45.851243 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: VF registering: eth1 Aug 13 00:20:45.851452 kernel: mlx5_core 5bb3:00:02.0 eth1: joined to eth0 Aug 13 00:20:45.858800 kernel: mlx5_core 5bb3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Aug 13 00:20:45.870658 kernel: mlx5_core 5bb3:00:02.0 enP23475s1: renamed from eth1 Aug 13 00:20:46.016849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 13 00:20:46.103756 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492) Aug 13 00:20:46.118378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Aug 13 00:20:46.144825 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (498) Aug 13 00:20:46.160740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 13 00:20:46.178649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 13 00:20:46.185867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 13 00:20:46.217925 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:20:46.246309 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:20:46.253652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:20:46.264673 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:20:47.265666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:20:47.266719 disk-uuid[607]: The operation has completed successfully. Aug 13 00:20:47.338270 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:20:47.338382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:20:47.367834 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:20:47.380886 sh[720]: Success Aug 13 00:20:47.409667 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:20:47.731041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:20:47.740767 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:20:47.752058 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 00:20:47.792786 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 13 00:20:47.792842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:47.800746 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:20:47.805783 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:20:47.810109 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:20:48.231620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:20:48.237280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:20:48.257031 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:20:48.264817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:20:48.317069 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:48.317133 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:48.317144 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:20:48.358670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:20:48.379656 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:20:48.380911 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:20:48.416178 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:48.410330 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Aug 13 00:20:48.422256 systemd-networkd[900]: lo: Link UP Aug 13 00:20:48.422260 systemd-networkd[900]: lo: Gained carrier Aug 13 00:20:48.424044 systemd-networkd[900]: Enumeration completed Aug 13 00:20:48.428226 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:48.428230 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:20:48.435843 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:20:48.455785 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:20:48.466980 systemd[1]: Reached target network.target - Network. Aug 13 00:20:48.535657 kernel: mlx5_core 5bb3:00:02.0 enP23475s1: Link up Aug 13 00:20:48.581660 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: Data path switched to VF: enP23475s1 Aug 13 00:20:48.582425 systemd-networkd[900]: enP23475s1: Link UP Aug 13 00:20:48.582676 systemd-networkd[900]: eth0: Link UP Aug 13 00:20:48.583063 systemd-networkd[900]: eth0: Gained carrier Aug 13 00:20:48.583072 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:48.594870 systemd-networkd[900]: enP23475s1: Gained carrier Aug 13 00:20:48.619696 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:20:49.429035 ignition[909]: Ignition 2.19.0 Aug 13 00:20:49.429048 ignition[909]: Stage: fetch-offline Aug 13 00:20:49.433676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 13 00:20:49.429084 ignition[909]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:49.429092 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:49.429183 ignition[909]: parsed url from cmdline: "" Aug 13 00:20:49.429185 ignition[909]: no config URL provided Aug 13 00:20:49.429190 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:20:49.429196 ignition[909]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:20:49.429201 ignition[909]: failed to fetch config: resource requires networking Aug 13 00:20:49.469924 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:20:49.429594 ignition[909]: Ignition finished successfully Aug 13 00:20:49.488185 ignition[916]: Ignition 2.19.0 Aug 13 00:20:49.488192 ignition[916]: Stage: fetch Aug 13 00:20:49.488364 ignition[916]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:49.488375 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:49.488469 ignition[916]: parsed url from cmdline: "" Aug 13 00:20:49.488472 ignition[916]: no config URL provided Aug 13 00:20:49.488476 ignition[916]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:20:49.488483 ignition[916]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:20:49.488502 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:20:49.588355 ignition[916]: GET result: OK Aug 13 00:20:49.588415 ignition[916]: config has been read from IMDS userdata Aug 13 00:20:49.588460 ignition[916]: parsing config with SHA512: e3e691810113ff7ca08365a1dc7193dbbc33817ae7ad9de2e062de7c82532e657ee07ef5d0298bfdf9fb975699924452e135786d09616f342247fb1d171e8f82 Aug 13 00:20:49.592045 unknown[916]: fetched base config from "system" Aug 13 00:20:49.592416 ignition[916]: fetch: fetch complete Aug 13 00:20:49.592051 unknown[916]: fetched base config from "system" Aug 13 
00:20:49.592420 ignition[916]: fetch: fetch passed Aug 13 00:20:49.592056 unknown[916]: fetched user config from "azure" Aug 13 00:20:49.592460 ignition[916]: Ignition finished successfully Aug 13 00:20:49.597381 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:20:49.615963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:20:49.632165 ignition[923]: Ignition 2.19.0 Aug 13 00:20:49.637468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:20:49.632172 ignition[923]: Stage: kargs Aug 13 00:20:49.632369 ignition[923]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:49.632378 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:49.633495 ignition[923]: kargs: kargs passed Aug 13 00:20:49.666858 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:20:49.633553 ignition[923]: Ignition finished successfully Aug 13 00:20:49.684293 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:20:49.681783 ignition[930]: Ignition 2.19.0 Aug 13 00:20:49.690584 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:20:49.681789 ignition[930]: Stage: disks Aug 13 00:20:49.696870 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:20:49.681960 ignition[930]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:49.707554 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:20:49.681969 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:49.718492 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:20:49.683091 ignition[930]: disks: disks passed Aug 13 00:20:49.727313 systemd[1]: Reached target basic.target - Basic System. 
Aug 13 00:20:49.683163 ignition[930]: Ignition finished successfully Aug 13 00:20:49.758892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:20:49.854688 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 13 00:20:49.865187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:20:49.881869 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:20:49.937930 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 13 00:20:49.938387 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:20:49.943253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:20:49.984722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:20:50.008666 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949) Aug 13 00:20:50.022202 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:50.022248 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:50.026342 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:20:50.026953 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:20:50.045657 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:20:50.055852 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:20:50.062514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:20:50.062545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:20:50.071906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:20:50.082349 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:20:50.102862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:20:50.477767 systemd-networkd[900]: eth0: Gained IPv6LL Aug 13 00:20:50.638268 coreos-metadata[957]: Aug 13 00:20:50.638 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:20:50.648278 coreos-metadata[957]: Aug 13 00:20:50.648 INFO Fetch successful Aug 13 00:20:50.654315 coreos-metadata[957]: Aug 13 00:20:50.653 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:20:50.666300 coreos-metadata[957]: Aug 13 00:20:50.666 INFO Fetch successful Aug 13 00:20:50.678325 coreos-metadata[957]: Aug 13 00:20:50.678 INFO wrote hostname ci-4081.3.5-a-c1c2bc5336 to /sysroot/etc/hostname Aug 13 00:20:50.687570 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:20:51.010287 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:20:51.043973 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:20:51.067892 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:20:51.091317 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:20:52.236829 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:52.250070 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:20:52.262760 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:20:52.281659 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:52.281132 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 13 00:20:52.307261 ignition[1067]: INFO : Ignition 2.19.0 Aug 13 00:20:52.307261 ignition[1067]: INFO : Stage: mount Aug 13 00:20:52.307261 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:52.307261 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:52.307261 ignition[1067]: INFO : mount: mount passed Aug 13 00:20:52.340785 ignition[1067]: INFO : Ignition finished successfully Aug 13 00:20:52.316431 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:20:52.345858 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:20:52.359660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:20:52.379254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:20:52.408881 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078) Aug 13 00:20:52.425905 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:52.425957 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:52.430939 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:20:52.447661 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:20:52.450016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:20:52.480020 ignition[1095]: INFO : Ignition 2.19.0 Aug 13 00:20:52.485322 ignition[1095]: INFO : Stage: files Aug 13 00:20:52.485322 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:52.485322 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:52.485322 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:20:52.518770 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:20:52.518770 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:20:52.628529 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:20:52.637889 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:20:52.637889 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:20:52.629021 unknown[1095]: wrote ssh authorized keys file for user: core Aug 13 00:20:52.674367 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:20:52.684808 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:20:52.828546 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:20:53.228848 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:20:53.228848 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:20:53.608352 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 00:20:53.889353 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:53.889353 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 00:20:53.918159 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:20:53.930319 ignition[1095]: INFO : files: files passed Aug 13 00:20:53.930319 ignition[1095]: INFO : Ignition finished successfully Aug 13 00:20:53.930200 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:20:53.977910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:20:53.995813 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Aug 13 00:20:54.010143 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:20:54.051498 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:54.051498 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:54.010232 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:20:54.082285 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:54.038603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:20:54.046942 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:20:54.082919 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:20:54.126361 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:20:54.126501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:20:54.138588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:20:54.150986 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:20:54.162218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:20:54.178158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:20:54.200566 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:20:54.215873 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:20:54.235105 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:20:54.235219 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 13 00:20:54.247300 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:20:54.259993 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:20:54.272851 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:20:54.284709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:20:54.284780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:20:54.301017 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:20:54.312725 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:20:54.322820 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:20:54.333659 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:20:54.346169 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:20:54.358215 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:20:54.369968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:20:54.382349 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:20:54.395074 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:20:54.405592 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:20:54.415460 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:20:54.415535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:20:54.430569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:54.442718 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:54.455099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:20:54.458662 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 13 00:20:54.468363 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:20:54.468434 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:20:54.486964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:20:54.487017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:20:54.498876 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:20:54.498933 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:20:54.511220 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:20:54.511265 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:20:54.569738 ignition[1149]: INFO : Ignition 2.19.0 Aug 13 00:20:54.569738 ignition[1149]: INFO : Stage: umount Aug 13 00:20:54.569738 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:54.569738 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:20:54.569738 ignition[1149]: INFO : umount: umount passed Aug 13 00:20:54.569738 ignition[1149]: INFO : Ignition finished successfully Aug 13 00:20:54.536838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:20:54.548569 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:20:54.548650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:54.574837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:20:54.589488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:20:54.589566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:20:54.602744 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:20:54.602804 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 13 00:20:54.620597 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:20:54.620722 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:20:54.630462 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:20:54.630584 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:20:54.646057 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:20:54.646123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:20:54.658431 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:20:54.658496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:20:54.673758 systemd[1]: Stopped target network.target - Network. Aug 13 00:20:54.684938 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:20:54.685025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:20:54.697573 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:20:54.708353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:20:54.711663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:20:54.720316 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:20:54.731722 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:20:54.742159 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:20:54.742229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:20:54.753321 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:20:54.753366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:20:54.759772 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:20:54.759825 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Aug 13 00:20:54.765696 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:20:54.765741 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:20:54.783377 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:20:54.794022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:20:54.805697 systemd-networkd[900]: eth0: DHCPv6 lease lost Aug 13 00:20:54.807280 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:20:54.811269 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:20:54.811422 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:20:54.828160 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:20:54.828268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:20:55.036333 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: Data path switched from VF: enP23475s1 Aug 13 00:20:54.841937 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:20:54.841989 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:54.877852 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:20:54.887611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:20:54.887712 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:20:54.899154 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:20:54.899209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:54.909954 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:20:54.910004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:54.921400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Aug 13 00:20:54.921450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:20:54.940624 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:20:54.993507 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:20:54.993690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:20:55.006813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:20:55.006873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:55.017313 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:20:55.017354 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:55.036593 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:20:55.036669 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:20:55.054425 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:20:55.054492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:20:55.067409 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:20:55.067465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:55.094843 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:20:55.110350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:20:55.110432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:20:55.123265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:20:55.123323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:55.136002 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Aug 13 00:20:55.136109 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:20:55.146152 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:20:55.146249 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:20:55.158387 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:20:55.158468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:20:55.172910 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:20:55.183824 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:20:55.341621 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 13 00:20:55.183925 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:55.212914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:20:55.226800 systemd[1]: Switching root. Aug 13 00:20:55.356377 systemd-journald[217]: Journal stopped 
Aug 13 00:20:44.317645 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Aug 13 00:20:44.317652 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Aug 13 00:20:44.317660 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Aug 13 00:20:44.317667 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Aug 13 00:20:44.317673 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 13 00:20:44.317680 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 13 00:20:44.317687 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 13 00:20:44.317693 kernel: psci: probing for conduit method from ACPI. Aug 13 00:20:44.317700 kernel: psci: PSCIv1.1 detected in firmware. Aug 13 00:20:44.317707 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:20:44.317713 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Aug 13 00:20:44.317720 kernel: psci: SMC Calling Convention v1.4 Aug 13 00:20:44.317727 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Aug 13 00:20:44.317734 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Aug 13 00:20:44.317742 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 00:20:44.317749 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 00:20:44.317756 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:20:44.317762 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:20:44.317769 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:20:44.317776 kernel: CPU features: detected: Hardware dirty bit management Aug 13 00:20:44.317783 kernel: CPU features: detected: Spectre-BHB Aug 13 00:20:44.317789 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 13 00:20:44.317796 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 13 00:20:44.317803 kernel: CPU features: detected: ARM erratum 1418040 Aug 13 00:20:44.317810 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Aug 13 00:20:44.317818 kernel: CPU features: detected: SSBS not fully self-synchronizing Aug 13 00:20:44.317825 kernel: alternatives: applying boot alternatives Aug 13 00:20:44.317833 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:20:44.317840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 00:20:44.317847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:20:44.317854 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:20:44.317861 kernel: Fallback order for Node 0: 0 Aug 13 00:20:44.317867 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Aug 13 00:20:44.317874 kernel: Policy zone: Normal Aug 13 00:20:44.317881 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:20:44.317888 kernel: software IO TLB: area num 2. Aug 13 00:20:44.317896 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Aug 13 00:20:44.317903 kernel: Memory: 3982620K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211540K reserved, 0K cma-reserved) Aug 13 00:20:44.317910 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:20:44.317916 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:20:44.317924 kernel: rcu: RCU event tracing is enabled. Aug 13 00:20:44.317931 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:20:44.317938 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:20:44.317945 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:20:44.317952 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 13 00:20:44.317958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:20:44.317965 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:20:44.317973 kernel: GICv3: 960 SPIs implemented Aug 13 00:20:44.317980 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:20:44.317986 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:20:44.317993 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 13 00:20:44.318000 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 13 00:20:44.318007 kernel: ITS: No ITS available, not enabling LPIs Aug 13 00:20:44.318013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:20:44.318021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:20:44.318027 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 13 00:20:44.318034 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 13 00:20:44.318041 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 13 00:20:44.318050 kernel: Console: colour dummy device 80x25 Aug 13 00:20:44.318057 kernel: printk: console [tty1] enabled Aug 13 00:20:44.318064 kernel: ACPI: Core revision 20230628 Aug 13 00:20:44.318071 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 13 00:20:44.318078 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:20:44.318085 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:20:44.318092 kernel: landlock: Up and running. Aug 13 00:20:44.318099 kernel: SELinux: Initializing. 
Aug 13 00:20:44.318106 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.318113 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.318121 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:20:44.318128 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:20:44.318136 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 13 00:20:44.318142 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Aug 13 00:20:44.318149 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 13 00:20:44.318156 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:20:44.318164 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:20:44.318177 kernel: Remapping and enabling EFI services. Aug 13 00:20:44.318184 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:20:44.318191 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:20:44.318199 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 13 00:20:44.318207 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 13 00:20:44.318215 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 13 00:20:44.318222 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:20:44.318229 kernel: SMP: Total of 2 processors activated. 
Aug 13 00:20:44.318237 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:20:44.318246 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 13 00:20:44.318253 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 13 00:20:44.318261 kernel: CPU features: detected: CRC32 instructions Aug 13 00:20:44.318268 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 13 00:20:44.318275 kernel: CPU features: detected: LSE atomic instructions Aug 13 00:20:44.318282 kernel: CPU features: detected: Privileged Access Never Aug 13 00:20:44.318290 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:20:44.318297 kernel: alternatives: applying system-wide alternatives Aug 13 00:20:44.318304 kernel: devtmpfs: initialized Aug 13 00:20:44.318313 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:20:44.318320 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:20:44.318328 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:20:44.318335 kernel: SMBIOS 3.1.0 present. 
Aug 13 00:20:44.318342 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Aug 13 00:20:44.318350 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:20:44.318357 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:20:44.318364 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:20:44.318372 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:20:44.318380 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:20:44.318388 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Aug 13 00:20:44.318395 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:20:44.318402 kernel: cpuidle: using governor menu Aug 13 00:20:44.318409 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 13 00:20:44.318417 kernel: ASID allocator initialised with 32768 entries Aug 13 00:20:44.318424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:20:44.318431 kernel: Serial: AMBA PL011 UART driver Aug 13 00:20:44.318439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 13 00:20:44.318447 kernel: Modules: 0 pages in range for non-PLT usage Aug 13 00:20:44.318455 kernel: Modules: 509008 pages in range for PLT usage Aug 13 00:20:44.318462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:20:44.318470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:20:44.318477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:20:44.318484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 00:20:44.318492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:20:44.318499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:20:44.318506 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:20:44.318515 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 00:20:44.318522 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:20:44.318530 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:20:44.318537 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:20:44.318544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:20:44.323123 kernel: ACPI: Interpreter enabled Aug 13 00:20:44.323133 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:20:44.323141 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 13 00:20:44.323148 kernel: printk: console [ttyAMA0] enabled Aug 13 00:20:44.323163 kernel: printk: bootconsole [pl11] disabled Aug 13 00:20:44.323171 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 13 00:20:44.323178 kernel: iommu: Default domain type: Translated Aug 13 00:20:44.323186 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:20:44.323193 kernel: efivars: Registered efivars operations Aug 13 00:20:44.323200 kernel: vgaarb: loaded Aug 13 00:20:44.323207 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:20:44.323215 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:20:44.323222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:20:44.323231 kernel: pnp: PnP ACPI init Aug 13 00:20:44.323239 kernel: pnp: PnP ACPI: found 0 devices Aug 13 00:20:44.323246 kernel: NET: Registered PF_INET protocol family Aug 13 00:20:44.323254 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:20:44.323262 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:20:44.323269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:20:44.323276 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Aug 13 00:20:44.323284 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:20:44.323291 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:20:44.323300 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.323308 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:20:44.323315 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:20:44.323323 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:20:44.323330 kernel: kvm [1]: HYP mode not available Aug 13 00:20:44.323337 kernel: Initialise system trusted keyrings Aug 13 00:20:44.323345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:20:44.323352 kernel: Key type asymmetric registered Aug 13 00:20:44.323359 kernel: Asymmetric key parser 'x509' registered Aug 13 00:20:44.323368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:20:44.323375 kernel: io scheduler mq-deadline registered Aug 13 00:20:44.323383 kernel: io scheduler kyber registered Aug 13 00:20:44.323390 kernel: io scheduler bfq registered Aug 13 00:20:44.323397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:20:44.323405 kernel: thunder_xcv, ver 1.0 Aug 13 00:20:44.323412 kernel: thunder_bgx, ver 1.0 Aug 13 00:20:44.323419 kernel: nicpf, ver 1.0 Aug 13 00:20:44.323426 kernel: nicvf, ver 1.0 Aug 13 00:20:44.323627 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:20:44.323708 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:20:43 UTC (1755044443) Aug 13 00:20:44.323719 kernel: efifb: probing for efifb Aug 13 00:20:44.323726 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 13 00:20:44.323734 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 13 00:20:44.323741 kernel: efifb: scrolling: redraw Aug 13 00:20:44.323749 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Aug 13 00:20:44.323756 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:20:44.323766 kernel: fb0: EFI VGA frame buffer device Aug 13 00:20:44.323774 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Aug 13 00:20:44.323781 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:20:44.323789 kernel: No ACPI PMU IRQ for CPU0 Aug 13 00:20:44.323796 kernel: No ACPI PMU IRQ for CPU1 Aug 13 00:20:44.323803 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 13 00:20:44.323811 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:20:44.323818 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:20:44.323825 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:20:44.323835 kernel: Segment Routing with IPv6 Aug 13 00:20:44.323842 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:20:44.323849 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:20:44.323857 kernel: Key type dns_resolver registered Aug 13 00:20:44.323864 kernel: registered taskstats version 1 Aug 13 00:20:44.323871 kernel: Loading compiled-in X.509 certificates Aug 13 00:20:44.323879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:20:44.323886 kernel: Key type .fscrypt registered Aug 13 00:20:44.323893 kernel: Key type fscrypt-provisioning registered Aug 13 00:20:44.323902 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:20:44.323909 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:20:44.323917 kernel: ima: No architecture policies found Aug 13 00:20:44.323924 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:20:44.323932 kernel: clk: Disabling unused clocks Aug 13 00:20:44.323939 kernel: Freeing unused kernel memory: 39424K Aug 13 00:20:44.323946 kernel: Run /init as init process Aug 13 00:20:44.323958 kernel: with arguments: Aug 13 00:20:44.323966 kernel: /init Aug 13 00:20:44.323974 kernel: with environment: Aug 13 00:20:44.323982 kernel: HOME=/ Aug 13 00:20:44.323989 kernel: TERM=linux Aug 13 00:20:44.323997 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:20:44.324006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:20:44.324016 systemd[1]: Detected virtualization microsoft. Aug 13 00:20:44.324024 systemd[1]: Detected architecture arm64. Aug 13 00:20:44.324031 systemd[1]: Running in initrd. Aug 13 00:20:44.324041 systemd[1]: No hostname configured, using default hostname. Aug 13 00:20:44.324049 systemd[1]: Hostname set to . Aug 13 00:20:44.324057 systemd[1]: Initializing machine ID from random generator. Aug 13 00:20:44.324065 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:20:44.324073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:20:44.324081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:20:44.324090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 00:20:44.324099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:20:44.324108 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:20:44.324117 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:20:44.324126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:20:44.324134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:20:44.324142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:44.324150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:44.324159 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:20:44.324167 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:20:44.324175 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:20:44.324183 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:20:44.324191 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:20:44.324199 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:20:44.324207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:20:44.324215 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:20:44.324223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:44.324232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:44.324240 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:44.324248 systemd[1]: Reached target sockets.target - Socket Units. 
Aug 13 00:20:44.324256 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:20:44.324265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:20:44.324273 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:20:44.324281 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:20:44.324289 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:20:44.324297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:20:44.324324 systemd-journald[217]: Collecting audit messages is disabled. Aug 13 00:20:44.324344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:44.324353 systemd-journald[217]: Journal started Aug 13 00:20:44.324373 systemd-journald[217]: Runtime Journal (/run/log/journal/866183bbb6a14c2ab62a9eb9c46031da) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:20:44.325102 systemd-modules-load[218]: Inserted module 'overlay' Aug 13 00:20:44.348402 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:20:44.351985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:20:44.373867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:20:44.373906 kernel: Bridge firewalling registered Aug 13 00:20:44.377316 systemd-modules-load[218]: Inserted module 'br_netfilter' Aug 13 00:20:44.378829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:44.389831 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:20:44.401717 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:44.412451 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:20:44.432831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:20:44.454111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:20:44.466759 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:20:44.495726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:20:44.506935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:44.519428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:44.526265 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:44.543507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:20:44.575956 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:20:44.584730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:20:44.609125 dracut-cmdline[248]: dracut-dracut-053 Aug 13 00:20:44.612809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:20:44.630591 dracut-cmdline[248]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:20:44.662441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:20:44.667537 systemd-resolved[250]: Positive Trust Anchors:
Aug 13 00:20:44.667562 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:20:44.667595 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:20:44.669807 systemd-resolved[250]: Defaulting to hostname 'linux'.
Aug 13 00:20:44.678218 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:20:44.684990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:44.783569 kernel: SCSI subsystem initialized
Aug 13 00:20:44.790580 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:20:44.801584 kernel: iscsi: registered transport (tcp)
Aug 13 00:20:44.821273 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:20:44.821340 kernel: QLogic iSCSI HBA Driver
Aug 13 00:20:44.861428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:20:44.883710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:20:44.915817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:20:44.915862 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:20:44.922929 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:20:44.972575 kernel: raid6: neonx8 gen() 15780 MB/s
Aug 13 00:20:44.992571 kernel: raid6: neonx4 gen() 15629 MB/s
Aug 13 00:20:45.012566 kernel: raid6: neonx2 gen() 13236 MB/s
Aug 13 00:20:45.033568 kernel: raid6: neonx1 gen() 10478 MB/s
Aug 13 00:20:45.053561 kernel: raid6: int64x8 gen() 6960 MB/s
Aug 13 00:20:45.073565 kernel: raid6: int64x4 gen() 7350 MB/s
Aug 13 00:20:45.094566 kernel: raid6: int64x2 gen() 6133 MB/s
Aug 13 00:20:45.118516 kernel: raid6: int64x1 gen() 5058 MB/s
Aug 13 00:20:45.118577 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Aug 13 00:20:45.143275 kernel: raid6: .... xor() 11903 MB/s, rmw enabled
Aug 13 00:20:45.143324 kernel: raid6: using neon recovery algorithm
Aug 13 00:20:45.152563 kernel: xor: measuring software checksum speed
Aug 13 00:20:45.159525 kernel: 8regs : 18696 MB/sec
Aug 13 00:20:45.159559 kernel: 32regs : 19603 MB/sec
Aug 13 00:20:45.162935 kernel: arm64_neon : 27132 MB/sec
Aug 13 00:20:45.167169 kernel: xor: using function: arm64_neon (27132 MB/sec)
Aug 13 00:20:45.217573 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:20:45.228425 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:20:45.243683 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:20:45.266114 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Aug 13 00:20:45.271835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:20:45.288764 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:20:45.308095 dracut-pre-trigger[441]: rd.md=0: removing MD RAID activation
Aug 13 00:20:45.334941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:45.349795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:20:45.386378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:45.408774 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:20:45.435587 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:45.443426 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:45.463784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:45.480424 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:20:45.503788 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:20:45.519235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:20:45.537567 kernel: hv_vmbus: Vmbus version:5.3
Aug 13 00:20:45.519393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:45.530898 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:45.593744 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:20:45.593773 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:20:45.593784 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:20:45.593793 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:20:45.549618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:45.611446 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:20:45.549840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.636743 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Aug 13 00:20:45.636776 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Aug 13 00:20:45.578654 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:45.659890 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Aug 13 00:20:45.660040 kernel: PTP clock support registered
Aug 13 00:20:45.645731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:45.705636 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:20:45.705662 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:20:45.705672 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:20:45.705681 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:20:45.705690 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:20:45.670396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:20:45.686000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:45.360084 kernel: hv_vmbus: registering driver hv_storvsc
Aug 13 00:20:45.360101 kernel: scsi host0: storvsc_host_t
Aug 13 00:20:45.360230 systemd-journald[217]: Time jumped backwards, rotating.
Aug 13 00:20:45.360279 kernel: scsi host1: storvsc_host_t
Aug 13 00:20:45.686107 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.404356 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Aug 13 00:20:45.404398 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: VF slot 1 added
Aug 13 00:20:45.404565 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Aug 13 00:20:45.334289 systemd-resolved[250]: Clock change detected. Flushing caches.
Aug 13 00:20:45.358649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:20:45.443395 kernel: hv_vmbus: registering driver hv_pci
Aug 13 00:20:45.443419 kernel: hv_pci 7b6b2293-5bb3-49e9-9276-be969652f9bc: PCI VMBus probing: Using version 0x10004
Aug 13 00:20:45.443604 kernel: hv_pci 7b6b2293-5bb3-49e9-9276-be969652f9bc: PCI host bridge to bus 5bb3:00
Aug 13 00:20:45.396283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:45.478236 kernel: pci_bus 5bb3:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Aug 13 00:20:45.478422 kernel: pci_bus 5bb3:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 00:20:45.478511 kernel: pci 5bb3:00:02.0: [15b3:1018] type 00 class 0x020000
Aug 13 00:20:45.429823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:20:45.505353 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Aug 13 00:20:45.505553 kernel: pci 5bb3:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:45.505580 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:20:45.505589 kernel: pci 5bb3:00:02.0: enabling Extended Tags
Aug 13 00:20:45.509795 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Aug 13 00:20:45.514657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:45.548356 kernel: pci 5bb3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5bb3:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Aug 13 00:20:45.548605 kernel: pci_bus 5bb3:00: busn_res: [bus 00-ff] end is updated to 00
Aug 13 00:20:45.549225 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Aug 13 00:20:45.559824 kernel: pci 5bb3:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Aug 13 00:20:45.559978 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 00:20:45.568947 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:20:45.576176 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Aug 13 00:20:45.576340 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Aug 13 00:20:45.585652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:45.590658 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:20:45.631096 kernel: mlx5_core 5bb3:00:02.0: enabling device (0000 -> 0002)
Aug 13 00:20:45.644676 kernel: mlx5_core 5bb3:00:02.0: firmware version: 16.30.1284
Aug 13 00:20:45.851243 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: VF registering: eth1
Aug 13 00:20:45.851452 kernel: mlx5_core 5bb3:00:02.0 eth1: joined to eth0
Aug 13 00:20:45.858800 kernel: mlx5_core 5bb3:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Aug 13 00:20:45.870658 kernel: mlx5_core 5bb3:00:02.0 enP23475s1: renamed from eth1
Aug 13 00:20:46.016849 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Aug 13 00:20:46.103756 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (492)
Aug 13 00:20:46.118378 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:20:46.144825 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (498)
Aug 13 00:20:46.160740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Aug 13 00:20:46.178649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 00:20:46.185867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Aug 13 00:20:46.217925 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:20:46.246309 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:46.253652 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:46.264673 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.265666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:20:47.266719 disk-uuid[607]: The operation has completed successfully.
Aug 13 00:20:47.338270 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:20:47.338382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:20:47.367834 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:20:47.380886 sh[720]: Success
Aug 13 00:20:47.409667 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:20:47.731041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:20:47.740767 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:20:47.752058 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:20:47.792786 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:20:47.792842 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:47.800746 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:20:47.805783 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:20:47.810109 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:20:48.231620 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:20:48.237280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:20:48.257031 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:20:48.264817 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:20:48.317069 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:48.317133 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:48.317144 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:48.358670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:20:48.379656 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:48.380911 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:20:48.416178 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:48.410330 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:20:48.422256 systemd-networkd[900]: lo: Link UP
Aug 13 00:20:48.422260 systemd-networkd[900]: lo: Gained carrier
Aug 13 00:20:48.424044 systemd-networkd[900]: Enumeration completed
Aug 13 00:20:48.428226 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:48.428230 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:20:48.435843 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:20:48.455785 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:20:48.466980 systemd[1]: Reached target network.target - Network.
Aug 13 00:20:48.535657 kernel: mlx5_core 5bb3:00:02.0 enP23475s1: Link up
Aug 13 00:20:48.581660 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: Data path switched to VF: enP23475s1
Aug 13 00:20:48.582425 systemd-networkd[900]: enP23475s1: Link UP
Aug 13 00:20:48.582676 systemd-networkd[900]: eth0: Link UP
Aug 13 00:20:48.583063 systemd-networkd[900]: eth0: Gained carrier
Aug 13 00:20:48.583072 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:20:48.594870 systemd-networkd[900]: enP23475s1: Gained carrier
Aug 13 00:20:48.619696 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:20:49.429035 ignition[909]: Ignition 2.19.0
Aug 13 00:20:49.429048 ignition[909]: Stage: fetch-offline
Aug 13 00:20:49.433676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:20:49.429084 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:49.429092 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:49.429183 ignition[909]: parsed url from cmdline: ""
Aug 13 00:20:49.429185 ignition[909]: no config URL provided
Aug 13 00:20:49.429190 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:49.429196 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:49.429201 ignition[909]: failed to fetch config: resource requires networking
Aug 13 00:20:49.469924 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:20:49.429594 ignition[909]: Ignition finished successfully
Aug 13 00:20:49.488185 ignition[916]: Ignition 2.19.0
Aug 13 00:20:49.488192 ignition[916]: Stage: fetch
Aug 13 00:20:49.488364 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:49.488375 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:49.488469 ignition[916]: parsed url from cmdline: ""
Aug 13 00:20:49.488472 ignition[916]: no config URL provided
Aug 13 00:20:49.488476 ignition[916]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:20:49.488483 ignition[916]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:20:49.488502 ignition[916]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Aug 13 00:20:49.588355 ignition[916]: GET result: OK
Aug 13 00:20:49.588415 ignition[916]: config has been read from IMDS userdata
Aug 13 00:20:49.588460 ignition[916]: parsing config with SHA512: e3e691810113ff7ca08365a1dc7193dbbc33817ae7ad9de2e062de7c82532e657ee07ef5d0298bfdf9fb975699924452e135786d09616f342247fb1d171e8f82
Aug 13 00:20:49.592045 unknown[916]: fetched base config from "system"
Aug 13 00:20:49.592416 ignition[916]: fetch: fetch complete
Aug 13 00:20:49.592051 unknown[916]: fetched base config from "system"
Aug 13 00:20:49.592420 ignition[916]: fetch: fetch passed
Aug 13 00:20:49.592056 unknown[916]: fetched user config from "azure"
Aug 13 00:20:49.592460 ignition[916]: Ignition finished successfully
Aug 13 00:20:49.597381 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:20:49.615963 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:20:49.632165 ignition[923]: Ignition 2.19.0
Aug 13 00:20:49.637468 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:20:49.632172 ignition[923]: Stage: kargs
Aug 13 00:20:49.632369 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:49.632378 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:49.633495 ignition[923]: kargs: kargs passed
Aug 13 00:20:49.666858 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:20:49.633553 ignition[923]: Ignition finished successfully
Aug 13 00:20:49.684293 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:20:49.681783 ignition[930]: Ignition 2.19.0
Aug 13 00:20:49.690584 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:20:49.681789 ignition[930]: Stage: disks
Aug 13 00:20:49.696870 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:20:49.681960 ignition[930]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:49.707554 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:20:49.681969 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:49.718492 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:20:49.683091 ignition[930]: disks: disks passed
Aug 13 00:20:49.727313 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:20:49.683163 ignition[930]: Ignition finished successfully
Aug 13 00:20:49.758892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:20:49.854688 systemd-fsck[938]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Aug 13 00:20:49.865187 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:20:49.881869 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:20:49.937930 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:20:49.938387 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:20:49.943253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:20:49.984722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:50.008666 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Aug 13 00:20:50.022202 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:50.022248 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:50.026342 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:50.026953 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:20:50.045657 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:50.055852 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:20:50.062514 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:20:50.062545 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:20:50.071906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:50.082349 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:20:50.102862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:20:50.477767 systemd-networkd[900]: eth0: Gained IPv6LL
Aug 13 00:20:50.638268 coreos-metadata[957]: Aug 13 00:20:50.638 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:20:50.648278 coreos-metadata[957]: Aug 13 00:20:50.648 INFO Fetch successful
Aug 13 00:20:50.654315 coreos-metadata[957]: Aug 13 00:20:50.653 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:20:50.666300 coreos-metadata[957]: Aug 13 00:20:50.666 INFO Fetch successful
Aug 13 00:20:50.678325 coreos-metadata[957]: Aug 13 00:20:50.678 INFO wrote hostname ci-4081.3.5-a-c1c2bc5336 to /sysroot/etc/hostname
Aug 13 00:20:50.687570 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:20:51.010287 initrd-setup-root[978]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:20:51.043973 initrd-setup-root[985]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:20:51.067892 initrd-setup-root[992]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:20:51.091317 initrd-setup-root[999]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:20:52.236829 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:20:52.250070 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:20:52.262760 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:20:52.281659 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:52.281132 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:20:52.307261 ignition[1067]: INFO : Ignition 2.19.0
Aug 13 00:20:52.307261 ignition[1067]: INFO : Stage: mount
Aug 13 00:20:52.307261 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:52.307261 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:52.307261 ignition[1067]: INFO : mount: mount passed
Aug 13 00:20:52.340785 ignition[1067]: INFO : Ignition finished successfully
Aug 13 00:20:52.316431 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:20:52.345858 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:20:52.359660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:20:52.379254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:20:52.408881 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1078)
Aug 13 00:20:52.425905 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:20:52.425957 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:20:52.430939 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:20:52.447661 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:20:52.450016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:20:52.480020 ignition[1095]: INFO : Ignition 2.19.0
Aug 13 00:20:52.485322 ignition[1095]: INFO : Stage: files
Aug 13 00:20:52.485322 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:52.485322 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:52.485322 ignition[1095]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:20:52.518770 ignition[1095]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:20:52.518770 ignition[1095]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:20:52.628529 ignition[1095]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:20:52.637889 ignition[1095]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:20:52.637889 ignition[1095]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:20:52.629021 unknown[1095]: wrote ssh authorized keys file for user: core
Aug 13 00:20:52.674367 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:20:52.684808 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 13 00:20:52.828546 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:20:53.228848 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:20:53.228848 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:20:53.249486 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 13 00:20:53.608352 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:20:53.889353 ignition[1095]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:20:53.889353 ignition[1095]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:20:53.918159 ignition[1095]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:20:53.930319 ignition[1095]: INFO : files: files passed
Aug 13 00:20:53.930319 ignition[1095]: INFO : Ignition finished successfully
Aug 13 00:20:53.930200 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:20:53.977910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:20:53.995813 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:20:54.010143 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:20:54.051498 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.051498 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.010232 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:20:54.082285 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:20:54.038603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:20:54.046942 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:20:54.082919 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:20:54.126361 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:20:54.126501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:20:54.138588 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:20:54.150986 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:20:54.162218 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:20:54.178158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:20:54.200566 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:20:54.215873 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:20:54.235105 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:20:54.235219 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:20:54.247300 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:20:54.259993 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:20:54.272851 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:20:54.284709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:20:54.284780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:20:54.301017 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:20:54.312725 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:20:54.322820 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:20:54.333659 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:20:54.346169 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:20:54.358215 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:20:54.369968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:20:54.382349 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:20:54.395074 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:20:54.405592 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:20:54.415460 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:20:54.415535 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:20:54.430569 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:20:54.442718 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:20:54.455099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:20:54.458662 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:20:54.468363 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:20:54.468434 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:20:54.486964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:20:54.487017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:20:54.498876 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:20:54.498933 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:20:54.511220 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:20:54.511265 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:20:54.569738 ignition[1149]: INFO : Ignition 2.19.0
Aug 13 00:20:54.569738 ignition[1149]: INFO : Stage: umount
Aug 13 00:20:54.569738 ignition[1149]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:20:54.569738 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:20:54.569738 ignition[1149]: INFO : umount: umount passed
Aug 13 00:20:54.569738 ignition[1149]: INFO : Ignition finished successfully
Aug 13 00:20:54.536838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:20:54.548569 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:20:54.548650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:20:54.574837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:20:54.589488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:20:54.589566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:20:54.602744 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:20:54.602804 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:20:54.620597 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:20:54.620722 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:20:54.630462 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:20:54.630584 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:20:54.646057 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:20:54.646123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:20:54.658431 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:20:54.658496 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:20:54.673758 systemd[1]: Stopped target network.target - Network.
Aug 13 00:20:54.684938 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:20:54.685025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:20:54.697573 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:20:54.708353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:20:54.711663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:20:54.720316 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:20:54.731722 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:20:54.742159 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:20:54.742229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:20:54.753321 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:20:54.753366 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:20:54.759772 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:20:54.759825 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:20:54.765696 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:20:54.765741 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:20:54.783377 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:20:54.794022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:20:54.805697 systemd-networkd[900]: eth0: DHCPv6 lease lost
Aug 13 00:20:54.807280 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:20:54.811269 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:20:54.811422 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:20:54.828160 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:20:54.828268 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:20:55.036333 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: Data path switched from VF: enP23475s1
Aug 13 00:20:54.841937 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:20:54.841989 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:20:54.877852 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:20:54.887611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:20:54.887712 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:20:54.899154 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:20:54.899209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:20:54.909954 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:20:54.910004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:20:54.921400 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:20:54.921450 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:20:54.940624 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:20:54.993507 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:20:54.993690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:20:55.006813 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:20:55.006873 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:20:55.017313 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:20:55.017354 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:20:55.036593 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:20:55.036669 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:20:55.054425 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:20:55.054492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:20:55.067409 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:20:55.067465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:20:55.094843 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:20:55.110350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:20:55.110432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:20:55.123265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:20:55.123323 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:20:55.136002 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:20:55.136109 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:20:55.146152 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:20:55.146249 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:20:55.158387 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:20:55.158468 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:20:55.172910 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:20:55.183824 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:20:55.341621 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:20:55.183925 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:20:55.212914 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:20:55.226800 systemd[1]: Switching root.
Aug 13 00:20:55.356377 systemd-journald[217]: Journal stopped
Aug 13 00:21:02.067733 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:21:02.067756 kernel: SELinux: policy capability open_perms=1
Aug 13 00:21:02.067766 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:21:02.067773 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:21:02.067783 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:21:02.067790 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:21:02.067799 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:21:02.067807 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:21:02.067815 kernel: audit: type=1403 audit(1755044456.567:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:21:02.067825 systemd[1]: Successfully loaded SELinux policy in 193.943ms.
Aug 13 00:21:02.067836 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.239ms.
Aug 13 00:21:02.067846 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:21:02.067857 systemd[1]: Detected virtualization microsoft.
Aug 13 00:21:02.067865 systemd[1]: Detected architecture arm64.
Aug 13 00:21:02.067875 systemd[1]: Detected first boot.
Aug 13 00:21:02.067886 systemd[1]: Hostname set to .
Aug 13 00:21:02.067895 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:21:02.067904 zram_generator::config[1189]: No configuration found.
Aug 13 00:21:02.067914 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:21:02.067923 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:21:02.067932 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:21:02.067941 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:21:02.067952 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:21:02.067961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:21:02.067971 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:21:02.067980 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:21:02.067989 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:21:02.067998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:21:02.068007 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:21:02.068018 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:21:02.068028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:21:02.068038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:21:02.068047 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:21:02.068057 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:21:02.068067 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:21:02.068077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:21:02.068086 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 13 00:21:02.068097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:21:02.068106 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:21:02.068115 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:21:02.068127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:21:02.068137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:21:02.068146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:21:02.068155 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:21:02.068165 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:21:02.068175 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:21:02.068185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:21:02.068194 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:21:02.068203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:21:02.068212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:21:02.068222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:21:02.068233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:21:02.068242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:21:02.068252 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:21:02.068263 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:21:02.068273 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:21:02.068282 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:21:02.068291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:21:02.068303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:21:02.068313 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:21:02.068323 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:21:02.068332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:02.068342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:21:02.068351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:21:02.068361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:02.068370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:21:02.068381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:02.068391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:21:02.068400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:02.068410 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:21:02.068420 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:21:02.068430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:21:02.068439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:21:02.068449 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:21:02.068460 kernel: fuse: init (API version 7.39)
Aug 13 00:21:02.068470 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:21:02.068479 kernel: loop: module loaded
Aug 13 00:21:02.068488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:21:02.068498 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:21:02.068507 kernel: ACPI: bus type drm_connector registered
Aug 13 00:21:02.068516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:21:02.068525 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:21:02.068535 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:21:02.068545 systemd[1]: Stopped verity-setup.service.
Aug 13 00:21:02.068568 systemd-journald[1285]: Collecting audit messages is disabled.
Aug 13 00:21:02.068587 systemd-journald[1285]: Journal started
Aug 13 00:21:02.068609 systemd-journald[1285]: Runtime Journal (/run/log/journal/63ac7d73e0a24c578d9c926d3d9a9b0c) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:21:00.973986 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:21:01.125346 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 00:21:01.125734 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:21:01.126039 systemd[1]: systemd-journald.service: Consumed 3.213s CPU time.
Aug 13 00:21:02.081821 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:21:02.082674 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:21:02.088785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:21:02.095027 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:21:02.100501 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:21:02.106840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:21:02.113706 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:21:02.119534 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:21:02.128211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:21:02.135510 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:21:02.135659 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:21:02.142491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:02.142618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:02.149364 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:21:02.149506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:21:02.156012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:02.156162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:02.163323 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:21:02.163445 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:21:02.170161 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:02.170280 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:02.176854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:21:02.183430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:21:02.190905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:21:02.198206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:21:02.215293 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:21:02.226728 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:21:02.235887 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:21:02.242581 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:21:02.242719 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:21:02.249962 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 13 00:21:02.260782 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:21:02.268374 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:21:02.274283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:02.295856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:21:02.303088 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:21:02.309554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:21:02.310698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:21:02.317036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:21:02.319249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:21:02.327928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:21:02.337766 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:21:02.352878 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 00:21:02.365170 systemd-journald[1285]: Time spent on flushing to /var/log/journal/63ac7d73e0a24c578d9c926d3d9a9b0c is 62.920ms for 895 entries.
Aug 13 00:21:02.365170 systemd-journald[1285]: System Journal (/var/log/journal/63ac7d73e0a24c578d9c926d3d9a9b0c) is 11.8M, max 2.6G, 2.6G free.
Aug 13 00:21:02.534139 systemd-journald[1285]: Received client request to flush runtime journal.
Aug 13 00:21:02.534229 systemd-journald[1285]: /var/log/journal/63ac7d73e0a24c578d9c926d3d9a9b0c/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Aug 13 00:21:02.534254 kernel: loop0: detected capacity change from 0 to 114328
Aug 13 00:21:02.534267 systemd-journald[1285]: Rotating system journal.
Aug 13 00:21:02.368058 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:21:02.383234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:21:02.395045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:21:02.416556 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:21:02.440568 udevadm[1326]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 00:21:02.441328 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:21:02.456935 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 13 00:21:02.531823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:21:02.539384 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:21:02.723702 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:21:02.725210 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 13 00:21:04.084672 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:21:04.338664 kernel: loop1: detected capacity change from 0 to 114432
Aug 13 00:21:04.487802 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:21:04.502871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:21:05.240387 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Aug 13 00:21:05.240411 systemd-tmpfiles[1345]: ACLs are not supported, ignoring.
Aug 13 00:21:05.244532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:21:05.602657 kernel: loop2: detected capacity change from 0 to 203944
Aug 13 00:21:05.990666 kernel: loop3: detected capacity change from 0 to 31320
Aug 13 00:21:07.300657 kernel: loop4: detected capacity change from 0 to 114328
Aug 13 00:21:08.264032 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:21:08.277254 kernel: loop5: detected capacity change from 0 to 114432
Aug 13 00:21:08.281815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:21:08.301464 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Aug 13 00:21:08.479673 kernel: loop6: detected capacity change from 0 to 203944
Aug 13 00:21:08.681675 kernel: loop7: detected capacity change from 0 to 31320
Aug 13 00:21:09.172270 (sd-merge)[1351]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Aug 13 00:21:09.172723 (sd-merge)[1351]: Merged extensions into '/usr'.
Aug 13 00:21:09.176018 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:21:09.176031 systemd[1]: Reloading...
Aug 13 00:21:09.243755 zram_generator::config[1381]: No configuration found.
Aug 13 00:21:09.367855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:21:09.423741 systemd[1]: Reloading finished in 247 ms.
Aug 13 00:21:09.450465 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:21:09.467850 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:21:09.473156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:21:09.514936 systemd[1]: Reloading requested from client PID 1434 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:21:09.514951 systemd[1]: Reloading...
Aug 13 00:21:09.576130 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:21:09.576410 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:21:09.577905 systemd-tmpfiles[1435]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:21:09.578226 systemd-tmpfiles[1435]: ACLs are not supported, ignoring.
Aug 13 00:21:09.578292 systemd-tmpfiles[1435]: ACLs are not supported, ignoring.
Aug 13 00:21:09.582654 zram_generator::config[1460]: No configuration found.
Aug 13 00:21:09.693825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:21:09.749991 systemd[1]: Reloading finished in 234 ms.
Aug 13 00:21:09.779390 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:21:09.779740 systemd-tmpfiles[1435]: Skipping /boot
Aug 13 00:21:09.779830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:09.782944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:09.787395 systemd-tmpfiles[1435]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:21:09.787535 systemd-tmpfiles[1435]: Skipping /boot
Aug 13 00:21:09.795964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:09.813621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:09.819910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:09.822683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:21:09.831445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:09.831852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:09.839072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:09.840717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:09.848013 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:09.848144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:21:09.874900 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 00:21:10.001909 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:21:10.008321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:21:10.009614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:21:10.017761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:21:10.026499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:21:10.032514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:21:10.036757 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:21:10.049952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:21:10.057232 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:21:10.064753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:21:10.064908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:21:10.072314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:21:10.072448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:21:10.081100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:21:10.081233 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:10.091715 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Aug 13 00:21:10.097287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:21:10.101895 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:21:10.110434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:21:10.120742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:21:10.129897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:21:10.136312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:21:10.136515 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:21:10.143587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:21:10.143766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:21:10.151024 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:21:10.151156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:21:10.157815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:21:10.157949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:21:10.165365 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:21:10.165495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:21:10.177002 systemd[1]: Finished ensure-sysext.service. Aug 13 00:21:10.186290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 00:21:10.186434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:21:10.190787 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:21:10.197087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:21:10.242516 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:21:10.479543 systemd-resolved[1534]: Positive Trust Anchors: Aug 13 00:21:10.479565 systemd-resolved[1534]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:21:10.479597 systemd-resolved[1534]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:21:10.979283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:21:10.980234 systemd-resolved[1534]: Using system hostname 'ci-4081.3.5-a-c1c2bc5336'. Aug 13 00:21:10.990789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:21:11.001543 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:21:11.023380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:21:11.086405 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 13 00:21:11.121994 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. 
Aug 13 00:21:11.147878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:11.531210 augenrules[1593]: No rules
Aug 13 00:21:11.537700 kernel: hv_vmbus: registering driver hv_balloon
Aug 13 00:21:11.538676 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 00:21:11.553680 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Aug 13 00:21:11.553759 kernel: hv_balloon: Memory hot add disabled on ARM64
Aug 13 00:21:11.608671 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:21:11.636659 kernel: hv_vmbus: registering driver hyperv_fb
Aug 13 00:21:11.636744 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Aug 13 00:21:11.643192 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 00:21:11.648150 kernel: Console: switching to colour dummy device 80x25
Aug 13 00:21:11.654986 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:21:11.659017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:21:11.659198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:11.671891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:21:11.881769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:21:12.506648 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1633)
Aug 13 00:21:12.590253 systemd-networkd[1586]: lo: Link UP
Aug 13 00:21:12.590269 systemd-networkd[1586]: lo: Gained carrier
Aug 13 00:21:12.592213 systemd-networkd[1586]: Enumeration completed
Aug 13 00:21:12.592321 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:21:12.599114 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:12.599117 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:21:12.599803 systemd[1]: Reached target network.target - Network.
Aug 13 00:21:12.610854 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:21:12.657657 kernel: mlx5_core 5bb3:00:02.0 enP23475s1: Link up
Aug 13 00:21:12.686900 kernel: hv_netvsc 000d3a07-9351-000d-3a07-9351000d3a07 eth0: Data path switched to VF: enP23475s1
Aug 13 00:21:12.687404 systemd-networkd[1586]: enP23475s1: Link UP
Aug 13 00:21:12.687496 systemd-networkd[1586]: eth0: Link UP
Aug 13 00:21:12.687499 systemd-networkd[1586]: eth0: Gained carrier
Aug 13 00:21:12.687514 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:21:12.692944 systemd-networkd[1586]: enP23475s1: Gained carrier
Aug 13 00:21:12.706679 systemd-networkd[1586]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Aug 13 00:21:12.944079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:21:12.959832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:21:13.428648 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 00:21:13.447099 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 00:21:13.454730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:21:13.930118 lvm[1671]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:21:14.090233 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 00:21:14.097935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:21:14.108814 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 00:21:14.119260 lvm[1674]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:21:14.232188 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 00:21:14.541856 systemd-networkd[1586]: eth0: Gained IPv6LL
Aug 13 00:21:14.543846 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 00:21:14.551776 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 00:21:15.597441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:21:16.511875 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:21:16.520541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:21:19.996820 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:21:20.005551 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:21:20.016849 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:21:20.044749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:21:20.051445 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:21:20.057563 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:21:20.066233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:21:20.073826 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:21:20.080105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:21:20.087124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:21:20.094503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:21:20.094539 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:21:20.099805 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:21:20.118380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:21:20.126197 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:21:20.136690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:21:20.143162 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:21:20.149267 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:21:20.154983 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:21:20.160214 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:21:20.160247 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:21:20.203771 systemd[1]: Starting chronyd.service - NTP client/server...
Aug 13 00:21:20.211775 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:21:20.222819 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 00:21:20.229850 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:21:20.238969 (chronyd)[1686]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Aug 13 00:21:20.239769 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:21:20.247943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:21:20.258038 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:21:20.258089 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Aug 13 00:21:20.261939 chronyd[1695]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Aug 13 00:21:20.265007 jq[1692]: false
Aug 13 00:21:20.266851 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Aug 13 00:21:20.273005 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Aug 13 00:21:20.274881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:21:20.275323 KVP[1696]: KVP starting; pid is:1696
Aug 13 00:21:20.294921 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:21:20.305394 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 00:21:20.312274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:21:20.318373 chronyd[1695]: Timezone right/UTC failed leap second check, ignoring
Aug 13 00:21:20.318600 chronyd[1695]: Loaded seccomp filter (level 2)
Aug 13 00:21:20.320964 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:21:20.330831 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:21:20.339057 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:21:20.347354 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:21:20.347865 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:21:20.354921 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found loop4
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found loop5
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found loop6
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found loop7
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda1
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda2
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda3
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found usr
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda4
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda6
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda7
Aug 13 00:21:20.362815 extend-filesystems[1693]: Found sda9
Aug 13 00:21:20.362815 extend-filesystems[1693]: Checking size of /dev/sda9
Aug 13 00:21:20.540773 kernel: hv_utils: KVP IC version 4.0
Aug 13 00:21:20.540833 update_engine[1708]: I20250813 00:21:20.476866 1708 main.cc:92] Flatcar Update Engine starting
Aug 13 00:21:20.548869 extend-filesystems[1693]: Old size kept for /dev/sda9
Aug 13 00:21:20.548869 extend-filesystems[1693]: Found sr0
Aug 13 00:21:20.377264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:21:20.400510 KVP[1696]: KVP LIC Version: 3.1
Aug 13 00:21:20.393119 systemd[1]: Started chronyd.service - NTP client/server.
Aug 13 00:21:20.582744 jq[1712]: true
Aug 13 00:21:20.421030 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:21:20.421194 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:21:20.422217 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:21:20.586036 tar[1722]: linux-arm64/helm
Aug 13 00:21:20.422372 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:21:20.449066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:21:20.587587 jq[1727]: true
Aug 13 00:21:20.449238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:21:20.470042 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:21:20.470672 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:21:20.492136 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 00:21:20.502077 (ntainerd)[1728]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:21:20.510043 systemd-logind[1707]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Aug 13 00:21:20.512816 systemd-logind[1707]: New seat seat0.
Aug 13 00:21:20.513522 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:21:20.683563 dbus-daemon[1689]: [system] SELinux support is enabled
Aug 13 00:21:20.683770 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:21:20.703812 update_engine[1708]: I20250813 00:21:20.693925 1708 update_check_scheduler.cc:74] Next update check in 4m20s
Aug 13 00:21:20.698169 dbus-daemon[1689]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 00:21:20.703983 bash[1769]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:21:20.694820 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:21:20.694848 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:21:20.707036 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:21:20.707060 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:21:20.718300 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:21:20.730297 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:21:20.743297 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 13 00:21:20.766665 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1746)
Aug 13 00:21:20.781985 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:21:20.804449 coreos-metadata[1688]: Aug 13 00:21:20.804 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:21:20.814715 coreos-metadata[1688]: Aug 13 00:21:20.814 INFO Fetch successful
Aug 13 00:21:20.814715 coreos-metadata[1688]: Aug 13 00:21:20.814 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Aug 13 00:21:20.817855 coreos-metadata[1688]: Aug 13 00:21:20.817 INFO Fetch successful
Aug 13 00:21:20.818189 coreos-metadata[1688]: Aug 13 00:21:20.818 INFO Fetching http://168.63.129.16/machine/bdd97b7e-c411-44d7-a3e5-b2fce3d60188/879395d7%2D6dd4%2D4aac%2Dbc39%2D16d5b35ac8f6.%5Fci%2D4081.3.5%2Da%2Dc1c2bc5336?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Aug 13 00:21:20.822309 coreos-metadata[1688]: Aug 13 00:21:20.822 INFO Fetch successful
Aug 13 00:21:20.822607 coreos-metadata[1688]: Aug 13 00:21:20.822 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:21:20.835754 coreos-metadata[1688]: Aug 13 00:21:20.835 INFO Fetch successful
Aug 13 00:21:20.881154 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 00:21:20.891433 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 00:21:21.034896 locksmithd[1786]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:21:21.049665 sshd_keygen[1718]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:21:21.070442 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:21:21.088029 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:21:21.105062 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Aug 13 00:21:21.117193 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:21:21.117368 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:21:21.137964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:21:21.163121 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Aug 13 00:21:21.196055 containerd[1728]: time="2025-08-13T00:21:21.195139060Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 00:21:21.189406 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 00:21:21.215885 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:21:21.232922 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 13 00:21:21.242914 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:21:21.259452 containerd[1728]: time="2025-08-13T00:21:21.257979100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.259452 containerd[1728]: time="2025-08-13T00:21:21.259411460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:21.259452 containerd[1728]: time="2025-08-13T00:21:21.259441020Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 00:21:21.259452 containerd[1728]: time="2025-08-13T00:21:21.259457020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 00:21:21.259616 containerd[1728]: time="2025-08-13T00:21:21.259602340Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 00:21:21.259657 containerd[1728]: time="2025-08-13T00:21:21.259618740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259724540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259745820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259914580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259930100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259943660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.259952620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260091 containerd[1728]: time="2025-08-13T00:21:21.260020060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260243 containerd[1728]: time="2025-08-13T00:21:21.260191380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260330 containerd[1728]: time="2025-08-13T00:21:21.260295180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 00:21:21.260330 containerd[1728]: time="2025-08-13T00:21:21.260320060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 00:21:21.260421 containerd[1728]: time="2025-08-13T00:21:21.260400180Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 00:21:21.260471 containerd[1728]: time="2025-08-13T00:21:21.260452940Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:21:21.273359 containerd[1728]: time="2025-08-13T00:21:21.273310260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 00:21:21.273472 containerd[1728]: time="2025-08-13T00:21:21.273393300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 00:21:21.273472 containerd[1728]: time="2025-08-13T00:21:21.273413620Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 00:21:21.273629 containerd[1728]: time="2025-08-13T00:21:21.273430620Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 00:21:21.273629 containerd[1728]: time="2025-08-13T00:21:21.273500180Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 00:21:21.273769 containerd[1728]: time="2025-08-13T00:21:21.273674340Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 00:21:21.274210 containerd[1728]: time="2025-08-13T00:21:21.274067300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 00:21:21.274886 containerd[1728]: time="2025-08-13T00:21:21.274812820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 00:21:21.274886 containerd[1728]: time="2025-08-13T00:21:21.274843620Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 00:21:21.274886 containerd[1728]: time="2025-08-13T00:21:21.274859140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 00:21:21.274886 containerd[1728]: time="2025-08-13T00:21:21.274873180Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.274886 containerd[1728]: time="2025-08-13T00:21:21.274885540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274898340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274912500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274927540Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274940580Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274955340Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.274967980Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275003780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275026460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275044500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275061300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275073540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275087020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275097940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275106 containerd[1728]: time="2025-08-13T00:21:21.275111220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275123380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275137220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275149820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275165340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275177420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275192260Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275217580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275238340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275249500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 00:21:21.275875 containerd[1728]: time="2025-08-13T00:21:21.275948100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.275973940Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.275985700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.276070940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.276082100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.276094940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.276105140Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 00:21:21.276515 containerd[1728]: time="2025-08-13T00:21:21.276117100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 00:21:21.276657 containerd[1728]: time="2025-08-13T00:21:21.276389180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 00:21:21.276657 containerd[1728]: time="2025-08-13T00:21:21.276475780Z" level=info msg="Connect containerd service"
Aug 13 00:21:21.276657 containerd[1728]: time="2025-08-13T00:21:21.276506660Z" level=info msg="using legacy CRI server"
Aug 13 00:21:21.276657 containerd[1728]: time="2025-08-13T00:21:21.276513740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 00:21:21.277360 containerd[1728]:
time="2025-08-13T00:21:21.276647420Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:21:21.278036 containerd[1728]: time="2025-08-13T00:21:21.277997700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:21:21.278812 containerd[1728]: time="2025-08-13T00:21:21.278751860Z" level=info msg="Start subscribing containerd event" Aug 13 00:21:21.278812 containerd[1728]: time="2025-08-13T00:21:21.278806180Z" level=info msg="Start recovering state" Aug 13 00:21:21.278948 containerd[1728]: time="2025-08-13T00:21:21.278869100Z" level=info msg="Start event monitor" Aug 13 00:21:21.278948 containerd[1728]: time="2025-08-13T00:21:21.278880780Z" level=info msg="Start snapshots syncer" Aug 13 00:21:21.278948 containerd[1728]: time="2025-08-13T00:21:21.278888820Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:21:21.278948 containerd[1728]: time="2025-08-13T00:21:21.278895980Z" level=info msg="Start streaming server" Aug 13 00:21:21.281495 containerd[1728]: time="2025-08-13T00:21:21.281320580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:21:21.281495 containerd[1728]: time="2025-08-13T00:21:21.281385820Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:21:21.281532 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:21:21.289489 containerd[1728]: time="2025-08-13T00:21:21.289266940Z" level=info msg="containerd successfully booted in 0.094920s" Aug 13 00:21:21.323396 tar[1722]: linux-arm64/LICENSE Aug 13 00:21:21.323758 tar[1722]: linux-arm64/README.md Aug 13 00:21:21.335172 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 13 00:21:21.606252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:21.613963 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:21.615951 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:21:21.624731 systemd[1]: Startup finished in 681ms (kernel) + 12.990s (initrd) + 25.250s (userspace) = 38.922s. Aug 13 00:21:22.122241 kubelet[1848]: E0813 00:21:22.122188 1848 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:21:22.124754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:21:22.125109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:21:22.190652 login[1838]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 00:21:22.206714 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:22.213864 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:21:22.223470 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:21:22.226287 systemd-logind[1707]: New session 2 of user core. Aug 13 00:21:22.263432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:21:22.271967 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:21:22.287684 (systemd)[1861]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:21:22.551878 systemd[1861]: Queued start job for default target default.target. 
Aug 13 00:21:22.558530 systemd[1861]: Created slice app.slice - User Application Slice. Aug 13 00:21:22.558561 systemd[1861]: Reached target paths.target - Paths. Aug 13 00:21:22.558573 systemd[1861]: Reached target timers.target - Timers. Aug 13 00:21:22.559798 systemd[1861]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:21:22.571396 systemd[1861]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:21:22.571519 systemd[1861]: Reached target sockets.target - Sockets. Aug 13 00:21:22.571532 systemd[1861]: Reached target basic.target - Basic System. Aug 13 00:21:22.571576 systemd[1861]: Reached target default.target - Main User Target. Aug 13 00:21:22.571605 systemd[1861]: Startup finished in 278ms. Aug 13 00:21:22.571740 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:21:22.578850 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:21:22.948599 waagent[1832]: 2025-08-13T00:21:22.948451Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 13 00:21:22.954239 waagent[1832]: 2025-08-13T00:21:22.954175Z INFO Daemon Daemon OS: flatcar 4081.3.5 Aug 13 00:21:22.958777 waagent[1832]: 2025-08-13T00:21:22.958728Z INFO Daemon Daemon Python: 3.11.9 Aug 13 00:21:22.963121 waagent[1832]: 2025-08-13T00:21:22.963049Z INFO Daemon Daemon Run daemon Aug 13 00:21:22.967717 waagent[1832]: 2025-08-13T00:21:22.967593Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.5' Aug 13 00:21:22.976586 waagent[1832]: 2025-08-13T00:21:22.976527Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:21:22.982235 waagent[1832]: 2025-08-13T00:21:22.982188Z INFO Daemon Daemon Activate resource disk Aug 13 00:21:22.987021 waagent[1832]: 2025-08-13T00:21:22.986972Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:21:22.998359 waagent[1832]: 2025-08-13T00:21:22.998303Z INFO Daemon Daemon Found 
device: None Aug 13 00:21:23.002676 waagent[1832]: 2025-08-13T00:21:23.002622Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:21:23.010812 waagent[1832]: 2025-08-13T00:21:23.010768Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:21:23.023534 waagent[1832]: 2025-08-13T00:21:23.023481Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:21:23.029403 waagent[1832]: 2025-08-13T00:21:23.029357Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:21:23.041596 waagent[1832]: 2025-08-13T00:21:23.041531Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 00:21:23.055281 waagent[1832]: 2025-08-13T00:21:23.055219Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:21:23.066094 waagent[1832]: 2025-08-13T00:21:23.066034Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:21:23.071143 waagent[1832]: 2025-08-13T00:21:23.071097Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:21:23.172823 waagent[1832]: 2025-08-13T00:21:23.171391Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:21:23.192111 login[1838]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:23.196215 systemd-logind[1707]: New session 1 of user core. Aug 13 00:21:23.201739 waagent[1832]: 2025-08-13T00:21:23.201583Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:21:23.206998 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:21:23.208232 waagent[1832]: 2025-08-13T00:21:23.207105Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:21:23.208817 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Aug 13 00:21:23.213609 waagent[1832]: 2025-08-13T00:21:23.213547Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 13 00:21:23.226809 waagent[1832]: 2025-08-13T00:21:23.221151Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:21:23.232199 waagent[1832]: 2025-08-13T00:21:23.232064Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:21:23.238970 waagent[1832]: 2025-08-13T00:21:23.237685Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:21:23.301242 waagent[1832]: 2025-08-13T00:21:23.301186Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:21:23.308308 waagent[1832]: 2025-08-13T00:21:23.308277Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:21:23.313820 waagent[1832]: 2025-08-13T00:21:23.313775Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:21:23.554770 waagent[1832]: 2025-08-13T00:21:23.554607Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:21:23.561459 waagent[1832]: 2025-08-13T00:21:23.561397Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 00:21:23.571085 waagent[1832]: 2025-08-13T00:21:23.571031Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:21:23.616184 waagent[1832]: 2025-08-13T00:21:23.616125Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 00:21:23.622239 waagent[1832]: 2025-08-13T00:21:23.622192Z INFO Daemon Aug 13 00:21:23.625151 waagent[1832]: 2025-08-13T00:21:23.625106Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 4b9eba3b-174f-4e86-a400-5f44a724b3db eTag: 14432510482746734904 source: Fabric] Aug 13 00:21:23.637369 waagent[1832]: 2025-08-13T00:21:23.637312Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Aug 13 00:21:23.644749 waagent[1832]: 2025-08-13T00:21:23.644692Z INFO Daemon Aug 13 00:21:23.647691 waagent[1832]: 2025-08-13T00:21:23.647649Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:21:23.659660 waagent[1832]: 2025-08-13T00:21:23.659602Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 00:21:23.751673 waagent[1832]: 2025-08-13T00:21:23.751069Z INFO Daemon Downloaded certificate {'thumbprint': '185DFFF2990EB4AA1A7804613976182712048EFF', 'hasPrivateKey': True} Aug 13 00:21:23.760947 waagent[1832]: 2025-08-13T00:21:23.760896Z INFO Daemon Downloaded certificate {'thumbprint': '1412200C0E62E0E473484C8DFF0BB0C285D873B4', 'hasPrivateKey': False} Aug 13 00:21:23.770789 waagent[1832]: 2025-08-13T00:21:23.770743Z INFO Daemon Fetch goal state completed Aug 13 00:21:23.781792 waagent[1832]: 2025-08-13T00:21:23.781730Z INFO Daemon Daemon Starting provisioning Aug 13 00:21:23.786868 waagent[1832]: 2025-08-13T00:21:23.786818Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:21:23.791676 waagent[1832]: 2025-08-13T00:21:23.791609Z INFO Daemon Daemon Set hostname [ci-4081.3.5-a-c1c2bc5336] Aug 13 00:21:23.826868 waagent[1832]: 2025-08-13T00:21:23.826798Z INFO Daemon Daemon Publish hostname [ci-4081.3.5-a-c1c2bc5336] Aug 13 00:21:23.833496 waagent[1832]: 2025-08-13T00:21:23.833436Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:21:23.839762 waagent[1832]: 2025-08-13T00:21:23.839713Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:21:23.893548 systemd-networkd[1586]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:21:23.894327 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 00:21:23.894380 systemd-networkd[1586]: eth0: DHCP lease lost Aug 13 00:21:23.894768 waagent[1832]: 2025-08-13T00:21:23.894682Z INFO Daemon Daemon Create user account if not exists Aug 13 00:21:23.900659 waagent[1832]: 2025-08-13T00:21:23.900344Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:21:23.901724 systemd-networkd[1586]: eth0: DHCPv6 lease lost Aug 13 00:21:23.906517 waagent[1832]: 2025-08-13T00:21:23.906448Z INFO Daemon Daemon Configure sudoer Aug 13 00:21:23.911939 waagent[1832]: 2025-08-13T00:21:23.911861Z INFO Daemon Daemon Configure sshd Aug 13 00:21:23.916446 waagent[1832]: 2025-08-13T00:21:23.916385Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 00:21:23.930959 waagent[1832]: 2025-08-13T00:21:23.930870Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:21:23.944696 systemd-networkd[1586]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:21:25.106928 waagent[1832]: 2025-08-13T00:21:25.106850Z INFO Daemon Daemon Provisioning complete Aug 13 00:21:25.125742 waagent[1832]: 2025-08-13T00:21:25.125684Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:21:25.132157 waagent[1832]: 2025-08-13T00:21:25.132105Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Aug 13 00:21:25.142879 waagent[1832]: 2025-08-13T00:21:25.142815Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 13 00:21:25.271148 waagent[1915]: 2025-08-13T00:21:25.270483Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 13 00:21:25.271148 waagent[1915]: 2025-08-13T00:21:25.270628Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.5 Aug 13 00:21:25.271148 waagent[1915]: 2025-08-13T00:21:25.270716Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 13 00:21:25.344997 waagent[1915]: 2025-08-13T00:21:25.344914Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:21:25.345309 waagent[1915]: 2025-08-13T00:21:25.345273Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:21:25.345441 waagent[1915]: 2025-08-13T00:21:25.345409Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:21:25.354016 waagent[1915]: 2025-08-13T00:21:25.353955Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:21:25.364264 waagent[1915]: 2025-08-13T00:21:25.364161Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:21:25.364925 waagent[1915]: 2025-08-13T00:21:25.364884Z INFO ExtHandler Aug 13 00:21:25.365143 waagent[1915]: 2025-08-13T00:21:25.365105Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ee794c53-7c32-41f9-82f1-c5c875853f00 eTag: 14432510482746734904 source: Fabric] Aug 13 00:21:25.366748 waagent[1915]: 2025-08-13T00:21:25.365479Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 00:21:25.366748 waagent[1915]: 2025-08-13T00:21:25.366061Z INFO ExtHandler Aug 13 00:21:25.366748 waagent[1915]: 2025-08-13T00:21:25.366132Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:21:25.371654 waagent[1915]: 2025-08-13T00:21:25.370404Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:21:25.444910 waagent[1915]: 2025-08-13T00:21:25.444833Z INFO ExtHandler Downloaded certificate {'thumbprint': '185DFFF2990EB4AA1A7804613976182712048EFF', 'hasPrivateKey': True} Aug 13 00:21:25.445451 waagent[1915]: 2025-08-13T00:21:25.445410Z INFO ExtHandler Downloaded certificate {'thumbprint': '1412200C0E62E0E473484C8DFF0BB0C285D873B4', 'hasPrivateKey': False} Aug 13 00:21:25.446024 waagent[1915]: 2025-08-13T00:21:25.445966Z INFO ExtHandler Fetch goal state completed Aug 13 00:21:25.461934 waagent[1915]: 2025-08-13T00:21:25.461883Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1915 Aug 13 00:21:25.462184 waagent[1915]: 2025-08-13T00:21:25.462148Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 00:21:25.463922 waagent[1915]: 2025-08-13T00:21:25.463880Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.5', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:21:25.464382 waagent[1915]: 2025-08-13T00:21:25.464343Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:21:25.512849 waagent[1915]: 2025-08-13T00:21:25.512809Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:21:25.513158 waagent[1915]: 2025-08-13T00:21:25.513121Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:21:25.518981 waagent[1915]: 2025-08-13T00:21:25.518946Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Aug 13 00:21:25.525655 systemd[1]: Reloading requested from client PID 1930 ('systemctl') (unit waagent.service)... Aug 13 00:21:25.525671 systemd[1]: Reloading... Aug 13 00:21:25.603774 zram_generator::config[1961]: No configuration found. Aug 13 00:21:25.712568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:25.794301 systemd[1]: Reloading finished in 268 ms. Aug 13 00:21:25.817026 waagent[1915]: 2025-08-13T00:21:25.813810Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 13 00:21:25.821114 systemd[1]: Reloading requested from client PID 2018 ('systemctl') (unit waagent.service)... Aug 13 00:21:25.821224 systemd[1]: Reloading... Aug 13 00:21:25.899674 zram_generator::config[2050]: No configuration found. Aug 13 00:21:26.006115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:26.083561 systemd[1]: Reloading finished in 261 ms. Aug 13 00:21:26.106599 waagent[1915]: 2025-08-13T00:21:26.103905Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 00:21:26.106599 waagent[1915]: 2025-08-13T00:21:26.104084Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 00:21:27.189653 waagent[1915]: 2025-08-13T00:21:27.189552Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 00:21:27.190237 waagent[1915]: 2025-08-13T00:21:27.190179Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 13 00:21:27.191071 waagent[1915]: 2025-08-13T00:21:27.190983Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:21:27.191522 waagent[1915]: 2025-08-13T00:21:27.191429Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:21:27.191984 waagent[1915]: 2025-08-13T00:21:27.191868Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:21:27.192184 waagent[1915]: 2025-08-13T00:21:27.191984Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:21:27.192184 waagent[1915]: 2025-08-13T00:21:27.192091Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:21:27.193196 waagent[1915]: 2025-08-13T00:21:27.192345Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:21:27.193196 waagent[1915]: 2025-08-13T00:21:27.192436Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:21:27.193196 waagent[1915]: 2025-08-13T00:21:27.192568Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:21:27.193196 waagent[1915]: 2025-08-13T00:21:27.192624Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:21:27.193196 waagent[1915]: 2025-08-13T00:21:27.192703Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:21:27.195893 waagent[1915]: 2025-08-13T00:21:27.195828Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:21:27.196179 waagent[1915]: 2025-08-13T00:21:27.196131Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:21:27.196364 waagent[1915]: 2025-08-13T00:21:27.196326Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:21:27.197171 waagent[1915]: 2025-08-13T00:21:27.196628Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 13 00:21:27.197171 waagent[1915]: 2025-08-13T00:21:27.196721Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:21:27.198961 waagent[1915]: 2025-08-13T00:21:27.198907Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:21:27.198961 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:21:27.198961 waagent[1915]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:21:27.198961 waagent[1915]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:21:27.198961 waagent[1915]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:21:27.198961 waagent[1915]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:21:27.198961 waagent[1915]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:21:27.203608 waagent[1915]: 2025-08-13T00:21:27.203553Z INFO ExtHandler ExtHandler Aug 13 00:21:27.204476 waagent[1915]: 2025-08-13T00:21:27.204114Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4d49328c-3677-487e-8f71-ec2f57bde2b8 correlation 83130b04-2b4c-432f-ab0a-db54f49ed7e2 created: 2025-08-13T00:20:00.530533Z] Aug 13 00:21:27.204671 waagent[1915]: 2025-08-13T00:21:27.204591Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Aug 13 00:21:27.207080 waagent[1915]: 2025-08-13T00:21:27.207033Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Aug 13 00:21:27.242374 waagent[1915]: 2025-08-13T00:21:27.242305Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 99E3B624-506F-4FE7-8FBF-A43F05A3A66D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 13 00:21:27.290335 waagent[1915]: 2025-08-13T00:21:27.290238Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:21:27.290335 waagent[1915]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:21:27.290335 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:21:27.290335 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:93:51 brd ff:ff:ff:ff:ff:ff Aug 13 00:21:27.290335 waagent[1915]: 3: enP23475s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:93:51 brd ff:ff:ff:ff:ff:ff\ altname enP23475p0s2 Aug 13 00:21:27.290335 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:21:27.290335 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:21:27.290335 waagent[1915]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:21:27.290335 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:21:27.290335 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 13 00:21:27.290335 waagent[1915]: 2: eth0 inet6 fe80::20d:3aff:fe07:9351/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 13 00:21:27.326393 waagent[1915]: 2025-08-13T00:21:27.326318Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Aug 13 00:21:27.326393 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.326393 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.326393 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.326393 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.326393 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.326393 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.326393 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:21:27.326393 waagent[1915]: 7 569 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:21:27.326393 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:21:27.329663 waagent[1915]: 2025-08-13T00:21:27.329564Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 13 00:21:27.329663 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.329663 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.329663 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.329663 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.329663 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:21:27.329663 waagent[1915]: pkts bytes target prot opt in out source destination Aug 13 00:21:27.329663 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:21:27.329663 waagent[1915]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:21:27.329663 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:21:27.330108 waagent[1915]: 2025-08-13T00:21:27.330025Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 
00:21:32.375759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:21:32.383815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:32.507125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:32.511282 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:32.599807 kubelet[2145]: E0813 00:21:32.599713 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:21:32.602993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:21:32.603279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:21:42.712153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:21:42.721847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:42.829266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:21:42.839944 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:42.939106 kubelet[2160]: E0813 00:21:42.939044 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:21:42.941712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:21:42.941966 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:21:44.113198 chronyd[1695]: Selected source PHC0 Aug 13 00:21:47.448902 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:21:47.457943 systemd[1]: Started sshd@0-10.200.20.42:22-10.200.16.10:41608.service - OpenSSH per-connection server daemon (10.200.16.10:41608). Aug 13 00:21:47.959666 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 41608 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:47.960996 sshd[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:47.965940 systemd-logind[1707]: New session 3 of user core. Aug 13 00:21:47.971877 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:21:48.376253 systemd[1]: Started sshd@1-10.200.20.42:22-10.200.16.10:41612.service - OpenSSH per-connection server daemon (10.200.16.10:41612). Aug 13 00:21:48.822473 sshd[2173]: Accepted publickey for core from 10.200.16.10 port 41612 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:48.823852 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:48.827665 systemd-logind[1707]: New session 4 of user core. 
Aug 13 00:21:48.835806 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:21:49.164916 sshd[2173]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:49.168683 systemd[1]: sshd@1-10.200.20.42:22-10.200.16.10:41612.service: Deactivated successfully. Aug 13 00:21:49.170421 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:21:49.171164 systemd-logind[1707]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:21:49.172281 systemd-logind[1707]: Removed session 4. Aug 13 00:21:49.246228 systemd[1]: Started sshd@2-10.200.20.42:22-10.200.16.10:41626.service - OpenSSH per-connection server daemon (10.200.16.10:41626). Aug 13 00:21:49.695081 sshd[2180]: Accepted publickey for core from 10.200.16.10 port 41626 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:49.696392 sshd[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:49.701352 systemd-logind[1707]: New session 5 of user core. Aug 13 00:21:49.707834 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:21:50.036146 sshd[2180]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:50.039431 systemd[1]: sshd@2-10.200.20.42:22-10.200.16.10:41626.service: Deactivated successfully. Aug 13 00:21:50.042155 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:21:50.043083 systemd-logind[1707]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:21:50.044228 systemd-logind[1707]: Removed session 5. Aug 13 00:21:50.121889 systemd[1]: Started sshd@3-10.200.20.42:22-10.200.16.10:41628.service - OpenSSH per-connection server daemon (10.200.16.10:41628). 
Aug 13 00:21:50.592891 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 41628 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:50.594186 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:50.599089 systemd-logind[1707]: New session 6 of user core. Aug 13 00:21:50.605943 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:21:50.935428 sshd[2187]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:50.938757 systemd-logind[1707]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:21:50.938925 systemd[1]: sshd@3-10.200.20.42:22-10.200.16.10:41628.service: Deactivated successfully. Aug 13 00:21:50.940345 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:21:50.942975 systemd-logind[1707]: Removed session 6. Aug 13 00:21:51.023960 systemd[1]: Started sshd@4-10.200.20.42:22-10.200.16.10:35422.service - OpenSSH per-connection server daemon (10.200.16.10:35422). Aug 13 00:21:51.498331 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 35422 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:51.499660 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:51.504595 systemd-logind[1707]: New session 7 of user core. Aug 13 00:21:51.509813 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:21:51.990369 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:21:51.990671 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.020440 sudo[2197]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:52.111005 sshd[2194]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:52.114870 systemd[1]: session-7.scope: Deactivated successfully. 
Aug 13 00:21:52.115693 systemd[1]: sshd@4-10.200.20.42:22-10.200.16.10:35422.service: Deactivated successfully. Aug 13 00:21:52.118328 systemd-logind[1707]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:21:52.119377 systemd-logind[1707]: Removed session 7. Aug 13 00:21:52.195863 systemd[1]: Started sshd@5-10.200.20.42:22-10.200.16.10:35430.service - OpenSSH per-connection server daemon (10.200.16.10:35430). Aug 13 00:21:52.645174 sshd[2202]: Accepted publickey for core from 10.200.16.10 port 35430 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:52.646571 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:52.650528 systemd-logind[1707]: New session 8 of user core. Aug 13 00:21:52.661848 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:21:52.901316 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:21:52.901991 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.905243 sudo[2206]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:52.909737 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:21:52.909990 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:52.932135 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:52.933275 auditctl[2209]: No rules Aug 13 00:21:52.933580 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:21:52.933784 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:21:52.936726 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:21:52.954451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Aug 13 00:21:52.957920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:52.978706 augenrules[2230]: No rules Aug 13 00:21:52.980973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:21:52.984128 sudo[2205]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:53.070340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:53.074926 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:21:53.075713 sshd[2202]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:53.078560 systemd[1]: sshd@5-10.200.20.42:22-10.200.16.10:35430.service: Deactivated successfully. Aug 13 00:21:53.080266 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:21:53.083052 systemd-logind[1707]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:21:53.084213 systemd-logind[1707]: Removed session 8. Aug 13 00:21:53.154095 systemd[1]: Started sshd@6-10.200.20.42:22-10.200.16.10:35442.service - OpenSSH per-connection server daemon (10.200.16.10:35442). Aug 13 00:21:53.205101 kubelet[2240]: E0813 00:21:53.205060 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:21:53.207709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:21:53.207937 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:21:53.601485 sshd[2248]: Accepted publickey for core from 10.200.16.10 port 35442 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI Aug 13 00:21:53.602800 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:53.606727 systemd-logind[1707]: New session 9 of user core. Aug 13 00:21:53.616979 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:21:53.856512 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:21:53.856827 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:21:55.078939 (dockerd)[2268]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:21:55.078946 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:21:55.762846 dockerd[2268]: time="2025-08-13T00:21:55.762784640Z" level=info msg="Starting up" Aug 13 00:21:56.120788 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1133771979-merged.mount: Deactivated successfully. Aug 13 00:21:56.212808 dockerd[2268]: time="2025-08-13T00:21:56.212721541Z" level=info msg="Loading containers: start." Aug 13 00:21:56.437774 kernel: Initializing XFRM netlink socket Aug 13 00:21:56.633305 systemd-networkd[1586]: docker0: Link UP Aug 13 00:21:56.656765 dockerd[2268]: time="2025-08-13T00:21:56.656716314Z" level=info msg="Loading containers: done." 
Aug 13 00:21:56.680581 dockerd[2268]: time="2025-08-13T00:21:56.680510193Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:21:56.680758 dockerd[2268]: time="2025-08-13T00:21:56.680668033Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:21:56.680813 dockerd[2268]: time="2025-08-13T00:21:56.680782393Z" level=info msg="Daemon has completed initialization" Aug 13 00:21:56.762519 dockerd[2268]: time="2025-08-13T00:21:56.762454808Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:21:56.763776 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:21:57.915582 containerd[1728]: time="2025-08-13T00:21:57.915251709Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:21:58.924047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473407628.mount: Deactivated successfully. Aug 13 00:21:59.648705 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Aug 13 00:22:00.729697 containerd[1728]: time="2025-08-13T00:22:00.728615733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.731623 containerd[1728]: time="2025-08-13T00:22:00.731377539Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651813" Aug 13 00:22:00.734390 containerd[1728]: time="2025-08-13T00:22:00.734331706Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.738883 containerd[1728]: time="2025-08-13T00:22:00.738818836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:00.740522 containerd[1728]: time="2025-08-13T00:22:00.739993679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 2.82469749s" Aug 13 00:22:00.740522 containerd[1728]: time="2025-08-13T00:22:00.740039119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:22:00.741525 containerd[1728]: time="2025-08-13T00:22:00.741360162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:22:01.909693 containerd[1728]: time="2025-08-13T00:22:01.908752727Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.911296 containerd[1728]: time="2025-08-13T00:22:01.911078452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460283" Aug 13 00:22:01.914554 containerd[1728]: time="2025-08-13T00:22:01.914505740Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.919293 containerd[1728]: time="2025-08-13T00:22:01.919241990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:01.920289 containerd[1728]: time="2025-08-13T00:22:01.920245832Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.17885207s" Aug 13 00:22:01.920366 containerd[1728]: time="2025-08-13T00:22:01.920287952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:22:01.921214 containerd[1728]: time="2025-08-13T00:22:01.920923634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:22:03.015689 containerd[1728]: time="2025-08-13T00:22:03.015144836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
00:22:03.018194 containerd[1728]: time="2025-08-13T00:22:03.017944962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125089" Aug 13 00:22:03.022513 containerd[1728]: time="2025-08-13T00:22:03.022477452Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:03.026751 containerd[1728]: time="2025-08-13T00:22:03.026695302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:03.028429 containerd[1728]: time="2025-08-13T00:22:03.027717544Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.10675907s" Aug 13 00:22:03.028429 containerd[1728]: time="2025-08-13T00:22:03.027756344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:22:03.028429 containerd[1728]: time="2025-08-13T00:22:03.028286305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:22:03.211925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:22:03.220977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:03.319904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:22:03.323481 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:03.357398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:03.728038 kubelet[2473]: E0813 00:22:03.355796 2473 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:03.357516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:04.669474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399054898.mount: Deactivated successfully. Aug 13 00:22:05.012618 containerd[1728]: time="2025-08-13T00:22:05.012559373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:05.014691 containerd[1728]: time="2025-08-13T00:22:05.014658858Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915993" Aug 13 00:22:05.017601 containerd[1728]: time="2025-08-13T00:22:05.017546624Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:05.021162 containerd[1728]: time="2025-08-13T00:22:05.021113592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:05.021895 containerd[1728]: time="2025-08-13T00:22:05.021775794Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id 
\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.993464049s" Aug 13 00:22:05.021895 containerd[1728]: time="2025-08-13T00:22:05.021807474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:22:05.022859 containerd[1728]: time="2025-08-13T00:22:05.022816556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:22:05.638217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222864068.mount: Deactivated successfully. Aug 13 00:22:05.710843 update_engine[1708]: I20250813 00:22:05.710781 1708 update_attempter.cc:509] Updating boot flags... Aug 13 00:22:05.786891 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2508) Aug 13 00:22:05.883661 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2510) Aug 13 00:22:07.242972 containerd[1728]: time="2025-08-13T00:22:07.242925750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.245797 containerd[1728]: time="2025-08-13T00:22:07.245754277Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 13 00:22:07.248507 containerd[1728]: time="2025-08-13T00:22:07.248474123Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.254120 containerd[1728]: time="2025-08-13T00:22:07.254082255Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.255374 containerd[1728]: time="2025-08-13T00:22:07.255335698Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.232468422s" Aug 13 00:22:07.255374 containerd[1728]: time="2025-08-13T00:22:07.255370258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:22:07.255841 containerd[1728]: time="2025-08-13T00:22:07.255741019Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:22:07.802437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount912813928.mount: Deactivated successfully. 
Aug 13 00:22:07.819677 containerd[1728]: time="2025-08-13T00:22:07.819569997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.824293 containerd[1728]: time="2025-08-13T00:22:07.824111367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:22:07.826990 containerd[1728]: time="2025-08-13T00:22:07.826938014Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.830816 containerd[1728]: time="2025-08-13T00:22:07.830765982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.831863 containerd[1728]: time="2025-08-13T00:22:07.831741104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 575.975805ms" Aug 13 00:22:07.831863 containerd[1728]: time="2025-08-13T00:22:07.831773264Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:22:07.832371 containerd[1728]: time="2025-08-13T00:22:07.832335826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:22:08.451880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797792269.mount: Deactivated successfully. 
Aug 13 00:22:11.573924 containerd[1728]: time="2025-08-13T00:22:11.573858330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.576122 containerd[1728]: time="2025-08-13T00:22:11.575864936Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Aug 13 00:22:11.578796 containerd[1728]: time="2025-08-13T00:22:11.578722023Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.583329 containerd[1728]: time="2025-08-13T00:22:11.583278235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.584584 containerd[1728]: time="2025-08-13T00:22:11.584458398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.752083292s" Aug 13 00:22:11.584584 containerd[1728]: time="2025-08-13T00:22:11.584494478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:22:13.461960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 00:22:13.469910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:13.569808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:22:13.585039 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:22:13.621014 kubelet[2693]: E0813 00:22:13.620960 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:22:13.624070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:22:13.624207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:22:17.997159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:18.007867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:18.033495 systemd[1]: Reloading requested from client PID 2707 ('systemctl') (unit session-9.scope)... Aug 13 00:22:18.033513 systemd[1]: Reloading... Aug 13 00:22:18.157667 zram_generator::config[2750]: No configuration found. Aug 13 00:22:18.266182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:18.345271 systemd[1]: Reloading finished in 311 ms. Aug 13 00:22:18.389579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:18.393002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:18.396172 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:22:18.396403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:18.401926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:22:18.564819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:18.574974 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:18.740659 kubelet[2816]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:18.740659 kubelet[2816]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:18.740659 kubelet[2816]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:18.740659 kubelet[2816]: I0813 00:22:18.739080 2816 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:19.779476 kubelet[2816]: I0813 00:22:19.779432 2816 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:22:19.779476 kubelet[2816]: I0813 00:22:19.779467 2816 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:19.779860 kubelet[2816]: I0813 00:22:19.779792 2816 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:22:19.800246 kubelet[2816]: E0813 00:22:19.800196 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: 
connect: connection refused" logger="UnhandledError" Aug 13 00:22:19.801984 kubelet[2816]: I0813 00:22:19.801947 2816 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:19.807543 kubelet[2816]: E0813 00:22:19.807479 2816 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:19.807681 kubelet[2816]: I0813 00:22:19.807560 2816 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:19.812656 kubelet[2816]: I0813 00:22:19.812233 2816 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:22:19.812656 kubelet[2816]: I0813 00:22:19.812373 2816 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:22:19.812656 kubelet[2816]: I0813 00:22:19.812529 2816 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:19.812838 kubelet[2816]: I0813 00:22:19.812549 2816 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-a-c1c2bc5336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:19.812933 kubelet[2816]: I0813 00:22:19.812844 2816 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:22:19.812933 kubelet[2816]: I0813 00:22:19.812854 2816 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:22:19.813006 kubelet[2816]: I0813 00:22:19.812973 2816 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:19.816698 kubelet[2816]: I0813 00:22:19.816463 2816 
kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:22:19.816698 kubelet[2816]: I0813 00:22:19.816696 2816 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:19.816862 kubelet[2816]: I0813 00:22:19.816721 2816 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:22:19.816862 kubelet[2816]: I0813 00:22:19.816738 2816 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:19.821253 kubelet[2816]: W0813 00:22:19.821137 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-c1c2bc5336&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:19.821253 kubelet[2816]: E0813 00:22:19.821206 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-c1c2bc5336&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:19.821463 kubelet[2816]: I0813 00:22:19.821449 2816 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:19.821981 kubelet[2816]: I0813 00:22:19.821964 2816 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:19.822152 kubelet[2816]: W0813 00:22:19.822140 2816 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 00:22:19.824510 kubelet[2816]: I0813 00:22:19.824489 2816 server.go:1274] "Started kubelet" Aug 13 00:22:19.826309 kubelet[2816]: W0813 00:22:19.826249 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:19.826377 kubelet[2816]: E0813 00:22:19.826317 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:19.826458 kubelet[2816]: I0813 00:22:19.826424 2816 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:19.827284 kubelet[2816]: I0813 00:22:19.827257 2816 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:22:19.827578 kubelet[2816]: I0813 00:22:19.827530 2816 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:19.827972 kubelet[2816]: I0813 00:22:19.827950 2816 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:19.829825 kubelet[2816]: E0813 00:22:19.828448 2816 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-a-c1c2bc5336.185b2bb5d939e17f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-a-c1c2bc5336,UID:ci-4081.3.5-a-c1c2bc5336,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-a-c1c2bc5336,},FirstTimestamp:2025-08-13 00:22:19.824464255 +0000 UTC m=+1.246308506,LastTimestamp:2025-08-13 00:22:19.824464255 +0000 UTC m=+1.246308506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-a-c1c2bc5336,}" Aug 13 00:22:19.830851 kubelet[2816]: I0813 00:22:19.830816 2816 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:22:19.832990 kubelet[2816]: E0813 00:22:19.832406 2816 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:22:19.832990 kubelet[2816]: I0813 00:22:19.832529 2816 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:19.835299 kubelet[2816]: E0813 00:22:19.835276 2816 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-c1c2bc5336\" not found" Aug 13 00:22:19.835416 kubelet[2816]: I0813 00:22:19.835405 2816 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:22:19.835678 kubelet[2816]: I0813 00:22:19.835660 2816 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:22:19.835807 kubelet[2816]: I0813 00:22:19.835797 2816 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:19.836549 kubelet[2816]: I0813 00:22:19.836529 2816 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:19.836772 kubelet[2816]: I0813 00:22:19.836753 2816 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:19.837270 kubelet[2816]: E0813 00:22:19.837241 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-c1c2bc5336?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="200ms" Aug 13 00:22:19.837433 kubelet[2816]: W0813 00:22:19.837401 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:19.837517 kubelet[2816]: E0813 00:22:19.837502 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:19.838725 kubelet[2816]: I0813 00:22:19.838707 2816 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:19.894126 kubelet[2816]: I0813 00:22:19.894083 2816 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:22:19.894126 kubelet[2816]: I0813 00:22:19.894118 2816 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:19.894264 kubelet[2816]: I0813 00:22:19.894139 2816 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:19.898783 kubelet[2816]: I0813 00:22:19.898755 2816 policy_none.go:49] "None policy: Start" Aug 13 00:22:19.899672 kubelet[2816]: I0813 00:22:19.899353 2816 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:22:19.899672 kubelet[2816]: I0813 00:22:19.899382 2816 state_mem.go:35] "Initializing new 
in-memory state store" Aug 13 00:22:19.913270 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:22:19.922619 kubelet[2816]: I0813 00:22:19.920112 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:19.922619 kubelet[2816]: I0813 00:22:19.921195 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:22:19.922619 kubelet[2816]: I0813 00:22:19.921216 2816 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:22:19.922619 kubelet[2816]: I0813 00:22:19.921236 2816 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:22:19.922763 kubelet[2816]: E0813 00:22:19.921558 2816 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:22:19.925344 kubelet[2816]: W0813 00:22:19.925318 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:19.925477 kubelet[2816]: E0813 00:22:19.925458 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:19.929126 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:22:19.932529 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 13 00:22:19.935728 kubelet[2816]: E0813 00:22:19.935698 2816 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-c1c2bc5336\" not found" Aug 13 00:22:19.943523 kubelet[2816]: I0813 00:22:19.943504 2816 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:19.944219 kubelet[2816]: I0813 00:22:19.943815 2816 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:19.944219 kubelet[2816]: I0813 00:22:19.943830 2816 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:19.944219 kubelet[2816]: I0813 00:22:19.944117 2816 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:19.946316 kubelet[2816]: E0813 00:22:19.946290 2816 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-a-c1c2bc5336\" not found" Aug 13 00:22:20.033884 systemd[1]: Created slice kubepods-burstable-podd4e661494e6752f4d82955fba22900c7.slice - libcontainer container kubepods-burstable-podd4e661494e6752f4d82955fba22900c7.slice. Aug 13 00:22:20.037860 kubelet[2816]: E0813 00:22:20.037761 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-c1c2bc5336?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="400ms" Aug 13 00:22:20.047038 kubelet[2816]: I0813 00:22:20.046724 2816 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.047683 systemd[1]: Created slice kubepods-burstable-pod7f81d3be4450e3733609e829ae2564e1.slice - libcontainer container kubepods-burstable-pod7f81d3be4450e3733609e829ae2564e1.slice. 
Aug 13 00:22:20.048205 kubelet[2816]: E0813 00:22:20.048000 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.062364 systemd[1]: Created slice kubepods-burstable-pod876898abff0bc254b6bf09c588f8144e.slice - libcontainer container kubepods-burstable-pod876898abff0bc254b6bf09c588f8144e.slice. Aug 13 00:22:20.136755 kubelet[2816]: I0813 00:22:20.136670 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136755 kubelet[2816]: I0813 00:22:20.136710 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136755 kubelet[2816]: I0813 00:22:20.136730 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136755 kubelet[2816]: I0813 00:22:20.136752 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-k8s-certs\") pod 
\"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136755 kubelet[2816]: I0813 00:22:20.136767 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136977 kubelet[2816]: I0813 00:22:20.136783 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136977 kubelet[2816]: I0813 00:22:20.136801 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136977 kubelet[2816]: I0813 00:22:20.136817 2816 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.136977 kubelet[2816]: I0813 00:22:20.136832 2816 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/876898abff0bc254b6bf09c588f8144e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-c1c2bc5336\" (UID: \"876898abff0bc254b6bf09c588f8144e\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.250719 kubelet[2816]: I0813 00:22:20.250398 2816 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.250845 kubelet[2816]: E0813 00:22:20.250734 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.346332 containerd[1728]: time="2025-08-13T00:22:20.346158203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-c1c2bc5336,Uid:d4e661494e6752f4d82955fba22900c7,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:20.360888 containerd[1728]: time="2025-08-13T00:22:20.360843470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-c1c2bc5336,Uid:7f81d3be4450e3733609e829ae2564e1,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:20.366178 containerd[1728]: time="2025-08-13T00:22:20.365940759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-c1c2bc5336,Uid:876898abff0bc254b6bf09c588f8144e,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:20.439037 kubelet[2816]: E0813 00:22:20.438973 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-c1c2bc5336?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="800ms" Aug 13 00:22:20.653477 kubelet[2816]: I0813 00:22:20.652951 2816 kubelet_node_status.go:72] "Attempting to register 
node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.653477 kubelet[2816]: E0813 00:22:20.653287 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:20.657998 kubelet[2816]: W0813 00:22:20.657907 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-c1c2bc5336&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:20.657998 kubelet[2816]: E0813 00:22:20.657975 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-a-c1c2bc5336&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:21.002745 kubelet[2816]: W0813 00:22:21.002668 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:21.002745 kubelet[2816]: E0813 00:22:21.002715 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:21.193934 kubelet[2816]: W0813 00:22:21.193870 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:21.194047 kubelet[2816]: E0813 00:22:21.193941 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:21.239876 kubelet[2816]: E0813 00:22:21.239833 2816 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-a-c1c2bc5336?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="1.6s" Aug 13 00:22:21.307784 kubelet[2816]: W0813 00:22:21.307595 2816 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Aug 13 00:22:21.307784 kubelet[2816]: E0813 00:22:21.307692 2816 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:21.455509 kubelet[2816]: I0813 00:22:21.455212 2816 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:21.455661 kubelet[2816]: E0813 00:22:21.455531 2816 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection 
refused" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:21.680090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396221689.mount: Deactivated successfully. Aug 13 00:22:21.699674 containerd[1728]: time="2025-08-13T00:22:21.699072022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:21.707802 containerd[1728]: time="2025-08-13T00:22:21.707753598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:22:21.711651 containerd[1728]: time="2025-08-13T00:22:21.710905524Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:21.714156 containerd[1728]: time="2025-08-13T00:22:21.714109210Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:21.717473 containerd[1728]: time="2025-08-13T00:22:21.716787215Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:21.719900 containerd[1728]: time="2025-08-13T00:22:21.719677580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:21.722745 containerd[1728]: time="2025-08-13T00:22:21.722614785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:22:21.725532 containerd[1728]: time="2025-08-13T00:22:21.725481670Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:22:21.726723 containerd[1728]: time="2025-08-13T00:22:21.726273312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.380038228s" Aug 13 00:22:21.730802 containerd[1728]: time="2025-08-13T00:22:21.730764080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.36475408s" Aug 13 00:22:21.740017 containerd[1728]: time="2025-08-13T00:22:21.739548456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.378620226s" Aug 13 00:22:21.807027 kubelet[2816]: E0813 00:22:21.806984 2816 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:22:22.406981 containerd[1728]: time="2025-08-13T00:22:22.404992145Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:22.406981 containerd[1728]: time="2025-08-13T00:22:22.405050145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:22.406981 containerd[1728]: time="2025-08-13T00:22:22.405074945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.406981 containerd[1728]: time="2025-08-13T00:22:22.405153746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.408572 containerd[1728]: time="2025-08-13T00:22:22.408406232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:22.408682 containerd[1728]: time="2025-08-13T00:22:22.408522592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:22.408682 containerd[1728]: time="2025-08-13T00:22:22.408544072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.409349 containerd[1728]: time="2025-08-13T00:22:22.409274553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.413403 containerd[1728]: time="2025-08-13T00:22:22.413316440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:22.413403 containerd[1728]: time="2025-08-13T00:22:22.413367081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:22.413560 containerd[1728]: time="2025-08-13T00:22:22.413385401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.413560 containerd[1728]: time="2025-08-13T00:22:22.413464161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:22.445817 systemd[1]: Started cri-containerd-3473b52f9be5ecddf89880d76050da1f4e99ff9eaa58cd2dcf88490a46a649e7.scope - libcontainer container 3473b52f9be5ecddf89880d76050da1f4e99ff9eaa58cd2dcf88490a46a649e7. Aug 13 00:22:22.451237 systemd[1]: Started cri-containerd-9c423dd7d49051efa1eb62c51aef868a9f908843a81296d85f6ad795887f8f34.scope - libcontainer container 9c423dd7d49051efa1eb62c51aef868a9f908843a81296d85f6ad795887f8f34. Aug 13 00:22:22.454097 systemd[1]: Started cri-containerd-e46f2454c604ac6eeaf37d8a703e167b48c05019fe1c6cf1d65d80554b75b5c3.scope - libcontainer container e46f2454c604ac6eeaf37d8a703e167b48c05019fe1c6cf1d65d80554b75b5c3. 
Aug 13 00:22:22.499615 containerd[1728]: time="2025-08-13T00:22:22.498579475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-a-c1c2bc5336,Uid:7f81d3be4450e3733609e829ae2564e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3473b52f9be5ecddf89880d76050da1f4e99ff9eaa58cd2dcf88490a46a649e7\"" Aug 13 00:22:22.505948 containerd[1728]: time="2025-08-13T00:22:22.505911329Z" level=info msg="CreateContainer within sandbox \"3473b52f9be5ecddf89880d76050da1f4e99ff9eaa58cd2dcf88490a46a649e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:22:22.510826 containerd[1728]: time="2025-08-13T00:22:22.510791258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-a-c1c2bc5336,Uid:876898abff0bc254b6bf09c588f8144e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c423dd7d49051efa1eb62c51aef868a9f908843a81296d85f6ad795887f8f34\"" Aug 13 00:22:22.511737 containerd[1728]: time="2025-08-13T00:22:22.511700099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-a-c1c2bc5336,Uid:d4e661494e6752f4d82955fba22900c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e46f2454c604ac6eeaf37d8a703e167b48c05019fe1c6cf1d65d80554b75b5c3\"" Aug 13 00:22:22.514554 containerd[1728]: time="2025-08-13T00:22:22.514524944Z" level=info msg="CreateContainer within sandbox \"9c423dd7d49051efa1eb62c51aef868a9f908843a81296d85f6ad795887f8f34\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:22:22.515927 containerd[1728]: time="2025-08-13T00:22:22.515832027Z" level=info msg="CreateContainer within sandbox \"e46f2454c604ac6eeaf37d8a703e167b48c05019fe1c6cf1d65d80554b75b5c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:22:22.547732 containerd[1728]: time="2025-08-13T00:22:22.547688285Z" level=info msg="CreateContainer within sandbox 
\"3473b52f9be5ecddf89880d76050da1f4e99ff9eaa58cd2dcf88490a46a649e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82d4a6da18ca19908f85ffa4cf4587daab69546d9de75763db24fd0135d6389a\"" Aug 13 00:22:22.548617 containerd[1728]: time="2025-08-13T00:22:22.548559046Z" level=info msg="StartContainer for \"82d4a6da18ca19908f85ffa4cf4587daab69546d9de75763db24fd0135d6389a\"" Aug 13 00:22:22.573273 containerd[1728]: time="2025-08-13T00:22:22.573228051Z" level=info msg="CreateContainer within sandbox \"9c423dd7d49051efa1eb62c51aef868a9f908843a81296d85f6ad795887f8f34\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c1645c1151a81bdf9994a465eff0cf9e370fa6cd954bd6b7f379237423e7e0c\"" Aug 13 00:22:22.574112 containerd[1728]: time="2025-08-13T00:22:22.574090013Z" level=info msg="StartContainer for \"2c1645c1151a81bdf9994a465eff0cf9e370fa6cd954bd6b7f379237423e7e0c\"" Aug 13 00:22:22.576630 systemd[1]: Started cri-containerd-82d4a6da18ca19908f85ffa4cf4587daab69546d9de75763db24fd0135d6389a.scope - libcontainer container 82d4a6da18ca19908f85ffa4cf4587daab69546d9de75763db24fd0135d6389a. Aug 13 00:22:22.587703 containerd[1728]: time="2025-08-13T00:22:22.587655797Z" level=info msg="CreateContainer within sandbox \"e46f2454c604ac6eeaf37d8a703e167b48c05019fe1c6cf1d65d80554b75b5c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77914279598c481804e077a332df2a784859bdbf2f69946befceed60a0ed6ea3\"" Aug 13 00:22:22.589042 containerd[1728]: time="2025-08-13T00:22:22.588848439Z" level=info msg="StartContainer for \"77914279598c481804e077a332df2a784859bdbf2f69946befceed60a0ed6ea3\"" Aug 13 00:22:22.611299 systemd[1]: Started cri-containerd-2c1645c1151a81bdf9994a465eff0cf9e370fa6cd954bd6b7f379237423e7e0c.scope - libcontainer container 2c1645c1151a81bdf9994a465eff0cf9e370fa6cd954bd6b7f379237423e7e0c. 
Aug 13 00:22:22.627074 containerd[1728]: time="2025-08-13T00:22:22.626605468Z" level=info msg="StartContainer for \"82d4a6da18ca19908f85ffa4cf4587daab69546d9de75763db24fd0135d6389a\" returns successfully" Aug 13 00:22:22.637099 systemd[1]: Started cri-containerd-77914279598c481804e077a332df2a784859bdbf2f69946befceed60a0ed6ea3.scope - libcontainer container 77914279598c481804e077a332df2a784859bdbf2f69946befceed60a0ed6ea3. Aug 13 00:22:22.677954 containerd[1728]: time="2025-08-13T00:22:22.677839521Z" level=info msg="StartContainer for \"2c1645c1151a81bdf9994a465eff0cf9e370fa6cd954bd6b7f379237423e7e0c\" returns successfully" Aug 13 00:22:22.716893 containerd[1728]: time="2025-08-13T00:22:22.716465711Z" level=info msg="StartContainer for \"77914279598c481804e077a332df2a784859bdbf2f69946befceed60a0ed6ea3\" returns successfully" Aug 13 00:22:23.059242 kubelet[2816]: I0813 00:22:23.059157 2816 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:24.837919 kubelet[2816]: E0813 00:22:24.837873 2816 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-a-c1c2bc5336\" not found" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:24.938341 kubelet[2816]: I0813 00:22:24.938299 2816 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:25.829109 kubelet[2816]: I0813 00:22:25.829045 2816 apiserver.go:52] "Watching apiserver" Aug 13 00:22:25.836596 kubelet[2816]: I0813 00:22:25.836499 2816 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:22:27.156539 systemd[1]: Reloading requested from client PID 3091 ('systemctl') (unit session-9.scope)... Aug 13 00:22:27.156555 systemd[1]: Reloading... Aug 13 00:22:27.239993 zram_generator::config[3128]: No configuration found. 
Aug 13 00:22:27.353554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:22:27.444001 systemd[1]: Reloading finished in 287 ms. Aug 13 00:22:27.483334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:27.498705 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:22:27.498939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:27.499027 systemd[1]: kubelet.service: Consumed 1.477s CPU time, 127.3M memory peak, 0B memory swap peak. Aug 13 00:22:27.506014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:22:27.601140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:22:27.606779 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:22:27.742874 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:22:28.021611 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:22:28.021611 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:22:28.021611 kubelet[3195]: I0813 00:22:27.743434 3195 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:22:28.021611 kubelet[3195]: I0813 00:22:27.749011 3195 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:22:28.021611 kubelet[3195]: I0813 00:22:27.749035 3195 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:22:28.021611 kubelet[3195]: I0813 00:22:27.749238 3195 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:22:28.023668 kubelet[3195]: I0813 00:22:28.022996 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:22:28.025935 kubelet[3195]: I0813 00:22:28.025900 3195 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:22:28.029395 kubelet[3195]: E0813 00:22:28.029362 3195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:22:28.029395 kubelet[3195]: I0813 00:22:28.029395 3195 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:22:28.037133 kubelet[3195]: I0813 00:22:28.037045 3195 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:22:28.037294 kubelet[3195]: I0813 00:22:28.037164 3195 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:22:28.037294 kubelet[3195]: I0813 00:22:28.037262 3195 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:22:28.037448 kubelet[3195]: I0813 00:22:28.037282 3195 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-a-c1c2bc5336","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:22:28.037525 kubelet[3195]: I0813 00:22:28.037450 3195 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:22:28.037525 kubelet[3195]: I0813 00:22:28.037460 3195 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:22:28.037525 kubelet[3195]: I0813 00:22:28.037492 3195 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:28.038227 kubelet[3195]: I0813 00:22:28.037586 3195 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:22:28.038227 kubelet[3195]: I0813 00:22:28.037599 3195 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:22:28.038227 kubelet[3195]: I0813 00:22:28.037616 3195 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:22:28.038227 kubelet[3195]: I0813 00:22:28.037631 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:22:28.040813 kubelet[3195]: I0813 00:22:28.040784 3195 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:22:28.041437 kubelet[3195]: I0813 00:22:28.041420 3195 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:22:28.042047 kubelet[3195]: I0813 00:22:28.041984 3195 server.go:1274] "Started kubelet" Aug 13 00:22:28.043477 kubelet[3195]: I0813 00:22:28.043328 3195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:22:28.044751 kubelet[3195]: I0813 00:22:28.043879 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:22:28.045734 kubelet[3195]: I0813 00:22:28.045429 3195 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:22:28.045734 kubelet[3195]: I0813 00:22:28.045533 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 
13 00:22:28.045734 kubelet[3195]: I0813 00:22:28.044569 3195 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:22:28.048248 kubelet[3195]: I0813 00:22:28.047893 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:22:28.050103 kubelet[3195]: I0813 00:22:28.049502 3195 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:22:28.050103 kubelet[3195]: E0813 00:22:28.049903 3195 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-a-c1c2bc5336\" not found" Aug 13 00:22:28.050466 kubelet[3195]: I0813 00:22:28.050421 3195 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:22:28.050582 kubelet[3195]: I0813 00:22:28.050562 3195 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:22:28.052651 kubelet[3195]: I0813 00:22:28.052605 3195 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:22:28.052835 kubelet[3195]: I0813 00:22:28.052767 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:22:28.054725 kubelet[3195]: I0813 00:22:28.054700 3195 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:22:28.098457 kubelet[3195]: I0813 00:22:28.098372 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:22:28.100250 kubelet[3195]: I0813 00:22:28.100128 3195 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:22:28.100250 kubelet[3195]: I0813 00:22:28.100173 3195 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:22:28.100250 kubelet[3195]: I0813 00:22:28.100191 3195 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:22:28.100507 kubelet[3195]: E0813 00:22:28.100335 3195 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:22:28.131364 kubelet[3195]: I0813 00:22:28.131331 3195 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:22:28.131364 kubelet[3195]: I0813 00:22:28.131352 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:22:28.131364 kubelet[3195]: I0813 00:22:28.131374 3195 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:22:28.131531 kubelet[3195]: I0813 00:22:28.131523 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:22:28.131562 kubelet[3195]: I0813 00:22:28.131533 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:22:28.131562 kubelet[3195]: I0813 00:22:28.131553 3195 policy_none.go:49] "None policy: Start" Aug 13 00:22:28.132669 kubelet[3195]: I0813 00:22:28.132268 3195 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:22:28.132669 kubelet[3195]: I0813 00:22:28.132294 3195 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:22:28.132669 kubelet[3195]: I0813 00:22:28.132439 3195 state_mem.go:75] "Updated machine memory state" Aug 13 00:22:28.138351 kubelet[3195]: I0813 00:22:28.137771 3195 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:22:28.138351 kubelet[3195]: I0813 00:22:28.137934 3195 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:22:28.138351 kubelet[3195]: I0813 00:22:28.137947 3195 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:22:28.138351 kubelet[3195]: I0813 00:22:28.138264 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:22:28.212341 kubelet[3195]: W0813 00:22:28.212303 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:28.215783 kubelet[3195]: W0813 00:22:28.215364 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:28.215783 kubelet[3195]: W0813 00:22:28.215368 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:28.243312 kubelet[3195]: I0813 00:22:28.243123 3195 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.252958 kubelet[3195]: I0813 00:22:28.252654 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.252958 kubelet[3195]: I0813 00:22:28.252696 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.252958 kubelet[3195]: I0813 00:22:28.252716 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.252958 kubelet[3195]: I0813 00:22:28.252736 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.252958 kubelet[3195]: I0813 00:22:28.252751 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/876898abff0bc254b6bf09c588f8144e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-a-c1c2bc5336\" (UID: \"876898abff0bc254b6bf09c588f8144e\") " pod="kube-system/kube-scheduler-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.253211 kubelet[3195]: I0813 00:22:28.252766 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4e661494e6752f4d82955fba22900c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-a-c1c2bc5336\" (UID: \"d4e661494e6752f4d82955fba22900c7\") " pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.253211 kubelet[3195]: I0813 00:22:28.252781 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" 
Aug 13 00:22:28.253211 kubelet[3195]: I0813 00:22:28.252795 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.253211 kubelet[3195]: I0813 00:22:28.252812 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7f81d3be4450e3733609e829ae2564e1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-a-c1c2bc5336\" (UID: \"7f81d3be4450e3733609e829ae2564e1\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.257698 kubelet[3195]: I0813 00:22:28.257500 3195 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:28.257698 kubelet[3195]: I0813 00:22:28.257595 3195 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:29.042245 kubelet[3195]: I0813 00:22:29.042190 3195 apiserver.go:52] "Watching apiserver" Aug 13 00:22:29.051216 kubelet[3195]: I0813 00:22:29.051160 3195 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:22:29.126018 kubelet[3195]: W0813 00:22:29.125799 3195 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:22:29.126018 kubelet[3195]: E0813 00:22:29.125861 3195 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.5-a-c1c2bc5336\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.5-a-c1c2bc5336" Aug 13 00:22:29.151334 kubelet[3195]: I0813 00:22:29.151257 3195 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-a-c1c2bc5336" podStartSLOduration=1.151238803 podStartE2EDuration="1.151238803s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:29.139829741 +0000 UTC m=+1.529725352" watchObservedRunningTime="2025-08-13 00:22:29.151238803 +0000 UTC m=+1.541134414" Aug 13 00:22:29.162073 kubelet[3195]: I0813 00:22:29.161781 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-a-c1c2bc5336" podStartSLOduration=1.161761703 podStartE2EDuration="1.161761703s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:29.151176923 +0000 UTC m=+1.541072534" watchObservedRunningTime="2025-08-13 00:22:29.161761703 +0000 UTC m=+1.551657274" Aug 13 00:22:29.162073 kubelet[3195]: I0813 00:22:29.161897 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-a-c1c2bc5336" podStartSLOduration=1.161893383 podStartE2EDuration="1.161893383s" podCreationTimestamp="2025-08-13 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:29.161538783 +0000 UTC m=+1.551434394" watchObservedRunningTime="2025-08-13 00:22:29.161893383 +0000 UTC m=+1.551788994" Aug 13 00:22:33.412101 kubelet[3195]: I0813 00:22:33.412009 3195 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:22:33.412737 containerd[1728]: time="2025-08-13T00:22:33.412273379Z" level=info msg="No cni config template is specified, wait for other system components 
to drop the config." Aug 13 00:22:33.413063 kubelet[3195]: I0813 00:22:33.412747 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:22:34.235340 systemd[1]: Created slice kubepods-besteffort-pod86464076_128a_446b_a975_ba7684970b77.slice - libcontainer container kubepods-besteffort-pod86464076_128a_446b_a975_ba7684970b77.slice. Aug 13 00:22:34.289935 kubelet[3195]: I0813 00:22:34.289884 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86464076-128a-446b-a975-ba7684970b77-kube-proxy\") pod \"kube-proxy-lkhhh\" (UID: \"86464076-128a-446b-a975-ba7684970b77\") " pod="kube-system/kube-proxy-lkhhh" Aug 13 00:22:34.289935 kubelet[3195]: I0813 00:22:34.289928 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86464076-128a-446b-a975-ba7684970b77-lib-modules\") pod \"kube-proxy-lkhhh\" (UID: \"86464076-128a-446b-a975-ba7684970b77\") " pod="kube-system/kube-proxy-lkhhh" Aug 13 00:22:34.290122 kubelet[3195]: I0813 00:22:34.289949 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86464076-128a-446b-a975-ba7684970b77-xtables-lock\") pod \"kube-proxy-lkhhh\" (UID: \"86464076-128a-446b-a975-ba7684970b77\") " pod="kube-system/kube-proxy-lkhhh" Aug 13 00:22:34.290122 kubelet[3195]: I0813 00:22:34.289966 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9xrv\" (UniqueName: \"kubernetes.io/projected/86464076-128a-446b-a975-ba7684970b77-kube-api-access-k9xrv\") pod \"kube-proxy-lkhhh\" (UID: \"86464076-128a-446b-a975-ba7684970b77\") " pod="kube-system/kube-proxy-lkhhh" Aug 13 00:22:34.521133 systemd[1]: Created slice 
kubepods-besteffort-pod61736ef9_5872_4d7e_a4cf_cab66aaaa2f1.slice - libcontainer container kubepods-besteffort-pod61736ef9_5872_4d7e_a4cf_cab66aaaa2f1.slice. Aug 13 00:22:34.545217 containerd[1728]: time="2025-08-13T00:22:34.545008161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lkhhh,Uid:86464076-128a-446b-a975-ba7684970b77,Namespace:kube-system,Attempt:0,}" Aug 13 00:22:34.591904 kubelet[3195]: I0813 00:22:34.591784 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61736ef9-5872-4d7e-a4cf-cab66aaaa2f1-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-2v2kq\" (UID: \"61736ef9-5872-4d7e-a4cf-cab66aaaa2f1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-2v2kq" Aug 13 00:22:34.591904 kubelet[3195]: I0813 00:22:34.591830 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbtr\" (UniqueName: \"kubernetes.io/projected/61736ef9-5872-4d7e-a4cf-cab66aaaa2f1-kube-api-access-9hbtr\") pod \"tigera-operator-5bf8dfcb4-2v2kq\" (UID: \"61736ef9-5872-4d7e-a4cf-cab66aaaa2f1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-2v2kq" Aug 13 00:22:34.593864 containerd[1728]: time="2025-08-13T00:22:34.593259574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:34.593864 containerd[1728]: time="2025-08-13T00:22:34.593757055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:34.593864 containerd[1728]: time="2025-08-13T00:22:34.593772255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.594186 containerd[1728]: time="2025-08-13T00:22:34.594134216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.614828 systemd[1]: Started cri-containerd-76ad96830a9a048abf28ffc5be7834da54f73fce7f6629490d216a124d24104f.scope - libcontainer container 76ad96830a9a048abf28ffc5be7834da54f73fce7f6629490d216a124d24104f. Aug 13 00:22:34.638827 containerd[1728]: time="2025-08-13T00:22:34.638781822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lkhhh,Uid:86464076-128a-446b-a975-ba7684970b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"76ad96830a9a048abf28ffc5be7834da54f73fce7f6629490d216a124d24104f\"" Aug 13 00:22:34.642934 containerd[1728]: time="2025-08-13T00:22:34.642883430Z" level=info msg="CreateContainer within sandbox \"76ad96830a9a048abf28ffc5be7834da54f73fce7f6629490d216a124d24104f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:22:34.673316 containerd[1728]: time="2025-08-13T00:22:34.673271448Z" level=info msg="CreateContainer within sandbox \"76ad96830a9a048abf28ffc5be7834da54f73fce7f6629490d216a124d24104f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d9579a36d7f5866874a03b0a6b333f350599e5e72afab09bd336d32ce0d68df\"" Aug 13 00:22:34.675303 containerd[1728]: time="2025-08-13T00:22:34.674255330Z" level=info msg="StartContainer for \"7d9579a36d7f5866874a03b0a6b333f350599e5e72afab09bd336d32ce0d68df\"" Aug 13 00:22:34.697824 systemd[1]: Started cri-containerd-7d9579a36d7f5866874a03b0a6b333f350599e5e72afab09bd336d32ce0d68df.scope - libcontainer container 7d9579a36d7f5866874a03b0a6b333f350599e5e72afab09bd336d32ce0d68df. 
Aug 13 00:22:34.730336 containerd[1728]: time="2025-08-13T00:22:34.730286958Z" level=info msg="StartContainer for \"7d9579a36d7f5866874a03b0a6b333f350599e5e72afab09bd336d32ce0d68df\" returns successfully" Aug 13 00:22:34.825690 containerd[1728]: time="2025-08-13T00:22:34.825564662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-2v2kq,Uid:61736ef9-5872-4d7e-a4cf-cab66aaaa2f1,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:22:34.862249 containerd[1728]: time="2025-08-13T00:22:34.862134132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:34.862249 containerd[1728]: time="2025-08-13T00:22:34.862181852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:34.862249 containerd[1728]: time="2025-08-13T00:22:34.862201532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.862782 containerd[1728]: time="2025-08-13T00:22:34.862534733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:34.881814 systemd[1]: Started cri-containerd-d2ec5130d72141a664807b9c3515b7718471096ca3b2a94ec3c4c9d2ba3a5996.scope - libcontainer container d2ec5130d72141a664807b9c3515b7718471096ca3b2a94ec3c4c9d2ba3a5996. 
Aug 13 00:22:34.914370 containerd[1728]: time="2025-08-13T00:22:34.914334592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-2v2kq,Uid:61736ef9-5872-4d7e-a4cf-cab66aaaa2f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d2ec5130d72141a664807b9c3515b7718471096ca3b2a94ec3c4c9d2ba3a5996\"" Aug 13 00:22:34.916567 containerd[1728]: time="2025-08-13T00:22:34.916338116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:22:36.560414 kubelet[3195]: I0813 00:22:36.560115 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lkhhh" podStartSLOduration=2.560096882 podStartE2EDuration="2.560096882s" podCreationTimestamp="2025-08-13 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:22:35.14141879 +0000 UTC m=+7.531314401" watchObservedRunningTime="2025-08-13 00:22:36.560096882 +0000 UTC m=+8.949992453" Aug 13 00:22:36.630914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502091793.mount: Deactivated successfully. 
Aug 13 00:22:37.123492 containerd[1728]: time="2025-08-13T00:22:37.123454887Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:37.126086 containerd[1728]: time="2025-08-13T00:22:37.125898692Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:22:37.129219 containerd[1728]: time="2025-08-13T00:22:37.128886978Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:37.133309 containerd[1728]: time="2025-08-13T00:22:37.133274146Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:37.134593 containerd[1728]: time="2025-08-13T00:22:37.134318268Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.217550071s" Aug 13 00:22:37.134593 containerd[1728]: time="2025-08-13T00:22:37.134356828Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:22:37.140077 containerd[1728]: time="2025-08-13T00:22:37.140031199Z" level=info msg="CreateContainer within sandbox \"d2ec5130d72141a664807b9c3515b7718471096ca3b2a94ec3c4c9d2ba3a5996\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:22:37.172381 containerd[1728]: time="2025-08-13T00:22:37.172319901Z" level=info msg="CreateContainer within sandbox 
\"d2ec5130d72141a664807b9c3515b7718471096ca3b2a94ec3c4c9d2ba3a5996\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ac89bba78f29c5b91ef6bc62b3e5baabcd63b12335e1c7d3a518707d395f01bc\"" Aug 13 00:22:37.173669 containerd[1728]: time="2025-08-13T00:22:37.172856703Z" level=info msg="StartContainer for \"ac89bba78f29c5b91ef6bc62b3e5baabcd63b12335e1c7d3a518707d395f01bc\"" Aug 13 00:22:37.198817 systemd[1]: Started cri-containerd-ac89bba78f29c5b91ef6bc62b3e5baabcd63b12335e1c7d3a518707d395f01bc.scope - libcontainer container ac89bba78f29c5b91ef6bc62b3e5baabcd63b12335e1c7d3a518707d395f01bc. Aug 13 00:22:37.223534 containerd[1728]: time="2025-08-13T00:22:37.223483040Z" level=info msg="StartContainer for \"ac89bba78f29c5b91ef6bc62b3e5baabcd63b12335e1c7d3a518707d395f01bc\" returns successfully" Aug 13 00:22:38.146719 kubelet[3195]: I0813 00:22:38.146557 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-2v2kq" podStartSLOduration=1.924659579 podStartE2EDuration="4.146540978s" podCreationTimestamp="2025-08-13 00:22:34 +0000 UTC" firstStartedPulling="2025-08-13 00:22:34.915934036 +0000 UTC m=+7.305829647" lastFinishedPulling="2025-08-13 00:22:37.137815435 +0000 UTC m=+9.527711046" observedRunningTime="2025-08-13 00:22:38.146296137 +0000 UTC m=+10.536191788" watchObservedRunningTime="2025-08-13 00:22:38.146540978 +0000 UTC m=+10.536436589" Aug 13 00:22:43.191330 sudo[2253]: pam_unix(sudo:session): session closed for user root Aug 13 00:22:43.297050 sshd[2248]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:43.302179 systemd-logind[1707]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:22:43.303247 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:22:43.303447 systemd[1]: session-9.scope: Consumed 7.701s CPU time, 149.2M memory peak, 0B memory swap peak. 
Aug 13 00:22:43.304575 systemd[1]: sshd@6-10.200.20.42:22-10.200.16.10:35442.service: Deactivated successfully. Aug 13 00:22:52.726744 systemd[1]: Created slice kubepods-besteffort-pod3ed7af2e_ac7d_4531_af99_fe71e69daee6.slice - libcontainer container kubepods-besteffort-pod3ed7af2e_ac7d_4531_af99_fe71e69daee6.slice. Aug 13 00:22:52.805802 kubelet[3195]: I0813 00:22:52.805624 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg4wt\" (UniqueName: \"kubernetes.io/projected/3ed7af2e-ac7d-4531-af99-fe71e69daee6-kube-api-access-bg4wt\") pod \"calico-typha-84cdcffc59-xsmws\" (UID: \"3ed7af2e-ac7d-4531-af99-fe71e69daee6\") " pod="calico-system/calico-typha-84cdcffc59-xsmws" Aug 13 00:22:52.805802 kubelet[3195]: I0813 00:22:52.805719 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ed7af2e-ac7d-4531-af99-fe71e69daee6-tigera-ca-bundle\") pod \"calico-typha-84cdcffc59-xsmws\" (UID: \"3ed7af2e-ac7d-4531-af99-fe71e69daee6\") " pod="calico-system/calico-typha-84cdcffc59-xsmws" Aug 13 00:22:52.805802 kubelet[3195]: I0813 00:22:52.805743 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ed7af2e-ac7d-4531-af99-fe71e69daee6-typha-certs\") pod \"calico-typha-84cdcffc59-xsmws\" (UID: \"3ed7af2e-ac7d-4531-af99-fe71e69daee6\") " pod="calico-system/calico-typha-84cdcffc59-xsmws" Aug 13 00:22:52.882304 systemd[1]: Created slice kubepods-besteffort-podfbf2dd0b_0729_4b4e_9bba_c749415a3d72.slice - libcontainer container kubepods-besteffort-podfbf2dd0b_0729_4b4e_9bba_c749415a3d72.slice. 
Aug 13 00:22:52.907424 kubelet[3195]: I0813 00:22:52.906471 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-cni-log-dir\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907424 kubelet[3195]: I0813 00:22:52.906513 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-xtables-lock\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907424 kubelet[3195]: I0813 00:22:52.906532 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-node-certs\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907424 kubelet[3195]: I0813 00:22:52.906547 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-flexvol-driver-host\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907424 kubelet[3195]: I0813 00:22:52.906564 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tx2\" (UniqueName: \"kubernetes.io/projected/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-kube-api-access-n5tx2\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907660 kubelet[3195]: I0813 00:22:52.906599 
3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-cni-net-dir\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907660 kubelet[3195]: I0813 00:22:52.906613 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-tigera-ca-bundle\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907660 kubelet[3195]: I0813 00:22:52.906630 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-policysync\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907660 kubelet[3195]: I0813 00:22:52.906668 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-var-run-calico\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907660 kubelet[3195]: I0813 00:22:52.906684 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-cni-bin-dir\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907773 kubelet[3195]: I0813 00:22:52.906698 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-lib-modules\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:52.907773 kubelet[3195]: I0813 00:22:52.906713 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fbf2dd0b-0729-4b4e-9bba-c749415a3d72-var-lib-calico\") pod \"calico-node-lg6q7\" (UID: \"fbf2dd0b-0729-4b4e-9bba-c749415a3d72\") " pod="calico-system/calico-node-lg6q7" Aug 13 00:22:53.006228 kubelet[3195]: E0813 00:22:53.005500 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:22:53.010762 kubelet[3195]: E0813 00:22:53.010711 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.010762 kubelet[3195]: W0813 00:22:53.010748 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.010762 kubelet[3195]: E0813 00:22:53.010771 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.011888 kubelet[3195]: E0813 00:22:53.011857 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.011888 kubelet[3195]: W0813 00:22:53.011880 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.011888 kubelet[3195]: E0813 00:22:53.011896 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.013739 kubelet[3195]: E0813 00:22:53.013701 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.013739 kubelet[3195]: W0813 00:22:53.013724 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.014790 kubelet[3195]: E0813 00:22:53.013802 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.014790 kubelet[3195]: E0813 00:22:53.014787 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.014855 kubelet[3195]: W0813 00:22:53.014801 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.014928 kubelet[3195]: E0813 00:22:53.014911 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.015003 kubelet[3195]: E0813 00:22:53.014979 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.015003 kubelet[3195]: W0813 00:22:53.014995 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.015003 kubelet[3195]: E0813 00:22:53.015027 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.015622 kubelet[3195]: E0813 00:22:53.015598 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.015622 kubelet[3195]: W0813 00:22:53.015615 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.015805 kubelet[3195]: E0813 00:22:53.015755 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.018661 kubelet[3195]: E0813 00:22:53.017732 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.018661 kubelet[3195]: W0813 00:22:53.017754 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.018661 kubelet[3195]: E0813 00:22:53.017842 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.018661 kubelet[3195]: E0813 00:22:53.018016 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.018661 kubelet[3195]: W0813 00:22:53.018027 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.019928 kubelet[3195]: E0813 00:22:53.019702 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.019928 kubelet[3195]: E0813 00:22:53.019907 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.019928 kubelet[3195]: W0813 00:22:53.019920 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.021334 kubelet[3195]: E0813 00:22:53.020040 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.021334 kubelet[3195]: E0813 00:22:53.020154 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.021334 kubelet[3195]: W0813 00:22:53.020162 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.021334 kubelet[3195]: E0813 00:22:53.020225 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.021334 kubelet[3195]: E0813 00:22:53.020759 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.021334 kubelet[3195]: W0813 00:22:53.020772 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.021334 kubelet[3195]: E0813 00:22:53.021225 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.022078 kubelet[3195]: E0813 00:22:53.021489 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.022078 kubelet[3195]: W0813 00:22:53.021510 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.022160 kubelet[3195]: E0813 00:22:53.021889 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.022943 kubelet[3195]: E0813 00:22:53.022917 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.022943 kubelet[3195]: W0813 00:22:53.022935 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.023076 kubelet[3195]: E0813 00:22:53.023025 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023147 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.024794 kubelet[3195]: W0813 00:22:53.023162 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023341 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.024794 kubelet[3195]: W0813 00:22:53.023349 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023358 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023507 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.024794 kubelet[3195]: W0813 00:22:53.023514 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023522 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.024794 kubelet[3195]: E0813 00:22:53.023537 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.025759 kubelet[3195]: E0813 00:22:53.025732 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.025759 kubelet[3195]: W0813 00:22:53.025751 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.025759 kubelet[3195]: E0813 00:22:53.025762 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.026764 kubelet[3195]: E0813 00:22:53.026713 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.026764 kubelet[3195]: W0813 00:22:53.026732 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.026764 kubelet[3195]: E0813 00:22:53.026744 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.032320 containerd[1728]: time="2025-08-13T00:22:53.032261989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cdcffc59-xsmws,Uid:3ed7af2e-ac7d-4531-af99-fe71e69daee6,Namespace:calico-system,Attempt:0,}" Aug 13 00:22:53.050452 kubelet[3195]: E0813 00:22:53.050303 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.051283 kubelet[3195]: W0813 00:22:53.050616 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.051283 kubelet[3195]: E0813 00:22:53.050662 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.086754 containerd[1728]: time="2025-08-13T00:22:53.086556469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:53.087031 containerd[1728]: time="2025-08-13T00:22:53.086878669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:53.088448 containerd[1728]: time="2025-08-13T00:22:53.086961269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:53.088448 containerd[1728]: time="2025-08-13T00:22:53.088371233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:53.097602 kubelet[3195]: E0813 00:22:53.097515 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.097818 kubelet[3195]: W0813 00:22:53.097746 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.097818 kubelet[3195]: E0813 00:22:53.097775 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.099539 kubelet[3195]: E0813 00:22:53.099388 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.099539 kubelet[3195]: W0813 00:22:53.099413 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.099539 kubelet[3195]: E0813 00:22:53.099428 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.100331 kubelet[3195]: E0813 00:22:53.100162 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.100331 kubelet[3195]: W0813 00:22:53.100177 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.100331 kubelet[3195]: E0813 00:22:53.100249 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.103237 kubelet[3195]: E0813 00:22:53.101734 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.103237 kubelet[3195]: W0813 00:22:53.101749 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.103237 kubelet[3195]: E0813 00:22:53.101775 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.103535 kubelet[3195]: E0813 00:22:53.103329 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.103535 kubelet[3195]: W0813 00:22:53.103340 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.103535 kubelet[3195]: E0813 00:22:53.103351 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.104047 kubelet[3195]: E0813 00:22:53.103892 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.104047 kubelet[3195]: W0813 00:22:53.103907 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.104047 kubelet[3195]: E0813 00:22:53.103939 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.104861 kubelet[3195]: E0813 00:22:53.104477 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.104861 kubelet[3195]: W0813 00:22:53.104491 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.104861 kubelet[3195]: E0813 00:22:53.104503 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.105402 kubelet[3195]: E0813 00:22:53.105040 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.105402 kubelet[3195]: W0813 00:22:53.105054 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.105402 kubelet[3195]: E0813 00:22:53.105107 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.106489 kubelet[3195]: E0813 00:22:53.106173 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.106489 kubelet[3195]: W0813 00:22:53.106188 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.106489 kubelet[3195]: E0813 00:22:53.106213 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.107902 kubelet[3195]: E0813 00:22:53.107857 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.107902 kubelet[3195]: W0813 00:22:53.107873 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.109400 kubelet[3195]: E0813 00:22:53.107886 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.109784 kubelet[3195]: E0813 00:22:53.109696 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.109784 kubelet[3195]: W0813 00:22:53.109722 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.109784 kubelet[3195]: E0813 00:22:53.109735 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.110921 kubelet[3195]: E0813 00:22:53.110591 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.110921 kubelet[3195]: W0813 00:22:53.110603 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.110921 kubelet[3195]: E0813 00:22:53.110616 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.111566 kubelet[3195]: E0813 00:22:53.111367 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.111566 kubelet[3195]: W0813 00:22:53.111381 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.111566 kubelet[3195]: E0813 00:22:53.111395 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.113357 kubelet[3195]: E0813 00:22:53.112869 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.113357 kubelet[3195]: W0813 00:22:53.112958 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.113357 kubelet[3195]: E0813 00:22:53.112973 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.114079 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.115755 kubelet[3195]: W0813 00:22:53.114102 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.114117 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.114712 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.115755 kubelet[3195]: W0813 00:22:53.114726 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.114738 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.115144 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.115755 kubelet[3195]: W0813 00:22:53.115156 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.115755 kubelet[3195]: E0813 00:22:53.115168 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.116001 kubelet[3195]: E0813 00:22:53.115805 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.116001 kubelet[3195]: W0813 00:22:53.115818 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.116001 kubelet[3195]: E0813 00:22:53.115830 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.116001 kubelet[3195]: E0813 00:22:53.115974 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.116001 kubelet[3195]: W0813 00:22:53.115981 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.116001 kubelet[3195]: E0813 00:22:53.115990 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.116122 kubelet[3195]: E0813 00:22:53.116106 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.116122 kubelet[3195]: W0813 00:22:53.116113 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.116122 kubelet[3195]: E0813 00:22:53.116120 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.117108 systemd[1]: Started cri-containerd-55c1f965682396eb6d263faebd933f1c9b3450e171549e5eb28070e922a4a0a3.scope - libcontainer container 55c1f965682396eb6d263faebd933f1c9b3450e171549e5eb28070e922a4a0a3. 
Aug 13 00:22:53.117608 kubelet[3195]: E0813 00:22:53.117380 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.117608 kubelet[3195]: W0813 00:22:53.117393 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.117608 kubelet[3195]: E0813 00:22:53.117406 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.117608 kubelet[3195]: I0813 00:22:53.117436 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f75dc80d-3a70-4464-bb0d-78154b4f7aab-socket-dir\") pod \"csi-node-driver-l6v4g\" (UID: \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\") " pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:22:53.117608 kubelet[3195]: E0813 00:22:53.118076 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.117608 kubelet[3195]: W0813 00:22:53.118091 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.117608 kubelet[3195]: E0813 00:22:53.118128 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.117608 kubelet[3195]: I0813 00:22:53.118147 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f75dc80d-3a70-4464-bb0d-78154b4f7aab-varrun\") pod \"csi-node-driver-l6v4g\" (UID: \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\") " pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:22:53.117608 kubelet[3195]: E0813 00:22:53.118687 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.119523 kubelet[3195]: W0813 00:22:53.118703 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.119523 kubelet[3195]: E0813 00:22:53.118715 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.119523 kubelet[3195]: I0813 00:22:53.118840 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f75dc80d-3a70-4464-bb0d-78154b4f7aab-registration-dir\") pod \"csi-node-driver-l6v4g\" (UID: \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\") " pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:22:53.119523 kubelet[3195]: E0813 00:22:53.119408 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.119611 kubelet[3195]: W0813 00:22:53.119531 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.119611 kubelet[3195]: E0813 00:22:53.119554 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.119611 kubelet[3195]: I0813 00:22:53.119571 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682d5\" (UniqueName: \"kubernetes.io/projected/f75dc80d-3a70-4464-bb0d-78154b4f7aab-kube-api-access-682d5\") pod \"csi-node-driver-l6v4g\" (UID: \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\") " pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:22:53.120225 kubelet[3195]: E0813 00:22:53.120152 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.120225 kubelet[3195]: W0813 00:22:53.120217 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.120427 kubelet[3195]: E0813 00:22:53.120239 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.120427 kubelet[3195]: I0813 00:22:53.120370 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f75dc80d-3a70-4464-bb0d-78154b4f7aab-kubelet-dir\") pod \"csi-node-driver-l6v4g\" (UID: \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\") " pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:22:53.120958 kubelet[3195]: E0813 00:22:53.120843 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.120958 kubelet[3195]: W0813 00:22:53.120859 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.120958 kubelet[3195]: E0813 00:22:53.120879 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.121351 kubelet[3195]: E0813 00:22:53.121306 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.121586 kubelet[3195]: W0813 00:22:53.121429 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.121586 kubelet[3195]: E0813 00:22:53.121456 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.122850 kubelet[3195]: E0813 00:22:53.122786 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.122850 kubelet[3195]: W0813 00:22:53.122826 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.122850 kubelet[3195]: E0813 00:22:53.122849 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.123076 kubelet[3195]: E0813 00:22:53.123056 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.123076 kubelet[3195]: W0813 00:22:53.123073 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.123155 kubelet[3195]: E0813 00:22:53.123104 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.123394 kubelet[3195]: E0813 00:22:53.123372 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.123394 kubelet[3195]: W0813 00:22:53.123388 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.123512 kubelet[3195]: E0813 00:22:53.123406 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.123809 kubelet[3195]: E0813 00:22:53.123787 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.123809 kubelet[3195]: W0813 00:22:53.123804 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.123958 kubelet[3195]: E0813 00:22:53.123917 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.124416 kubelet[3195]: E0813 00:22:53.124295 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.124416 kubelet[3195]: W0813 00:22:53.124413 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.124556 kubelet[3195]: E0813 00:22:53.124428 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.125081 kubelet[3195]: E0813 00:22:53.124956 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.125081 kubelet[3195]: W0813 00:22:53.125080 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.125171 kubelet[3195]: E0813 00:22:53.125097 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.125663 kubelet[3195]: E0813 00:22:53.125624 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.125735 kubelet[3195]: W0813 00:22:53.125684 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.125735 kubelet[3195]: E0813 00:22:53.125700 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.126263 kubelet[3195]: E0813 00:22:53.126219 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.126313 kubelet[3195]: W0813 00:22:53.126277 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.126313 kubelet[3195]: E0813 00:22:53.126291 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.190010 containerd[1728]: time="2025-08-13T00:22:53.189963776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lg6q7,Uid:fbf2dd0b-0729-4b4e-9bba-c749415a3d72,Namespace:calico-system,Attempt:0,}" Aug 13 00:22:53.196949 containerd[1728]: time="2025-08-13T00:22:53.196903432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84cdcffc59-xsmws,Uid:3ed7af2e-ac7d-4531-af99-fe71e69daee6,Namespace:calico-system,Attempt:0,} returns sandbox id \"55c1f965682396eb6d263faebd933f1c9b3450e171549e5eb28070e922a4a0a3\"" Aug 13 00:22:53.199032 containerd[1728]: time="2025-08-13T00:22:53.198889276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:22:53.222252 kubelet[3195]: E0813 00:22:53.222139 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.222252 kubelet[3195]: W0813 00:22:53.222174 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.222252 kubelet[3195]: E0813 00:22:53.222208 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.222793 kubelet[3195]: E0813 00:22:53.222762 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.226192 kubelet[3195]: W0813 00:22:53.222844 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.226192 kubelet[3195]: E0813 00:22:53.222871 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.226763 kubelet[3195]: E0813 00:22:53.226739 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.226763 kubelet[3195]: W0813 00:22:53.226761 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.229984 kubelet[3195]: E0813 00:22:53.229714 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.229984 kubelet[3195]: E0813 00:22:53.229810 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.229984 kubelet[3195]: W0813 00:22:53.229820 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.229984 kubelet[3195]: E0813 00:22:53.229839 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.230400 kubelet[3195]: E0813 00:22:53.230375 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.230400 kubelet[3195]: W0813 00:22:53.230395 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.231442 kubelet[3195]: E0813 00:22:53.231410 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.231794 kubelet[3195]: E0813 00:22:53.231770 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.231794 kubelet[3195]: W0813 00:22:53.231788 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.231794 kubelet[3195]: E0813 00:22:53.231809 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.232309 kubelet[3195]: E0813 00:22:53.232067 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.232309 kubelet[3195]: W0813 00:22:53.232082 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.232309 kubelet[3195]: E0813 00:22:53.232105 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.232602 kubelet[3195]: E0813 00:22:53.232323 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.232602 kubelet[3195]: W0813 00:22:53.232336 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.232602 kubelet[3195]: E0813 00:22:53.232355 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.234181 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.234213 kubelet[3195]: W0813 00:22:53.234205 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.234228 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.234518 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.234213 kubelet[3195]: W0813 00:22:53.234678 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.234704 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.235241 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.234213 kubelet[3195]: W0813 00:22:53.235254 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.235294 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.234213 kubelet[3195]: E0813 00:22:53.235517 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.237097 kubelet[3195]: W0813 00:22:53.235528 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.235575 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.235714 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.237097 kubelet[3195]: W0813 00:22:53.235724 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.235887 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.237097 kubelet[3195]: W0813 00:22:53.235895 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.236053 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.236080 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.237097 kubelet[3195]: E0813 00:22:53.236281 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.237097 kubelet[3195]: W0813 00:22:53.236296 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.237302 kubelet[3195]: E0813 00:22:53.236395 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.237302 kubelet[3195]: E0813 00:22:53.236852 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.237302 kubelet[3195]: W0813 00:22:53.236864 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.238142 kubelet[3195]: E0813 00:22:53.237998 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.238344 kubelet[3195]: E0813 00:22:53.238329 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.238501 kubelet[3195]: W0813 00:22:53.238404 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.238501 kubelet[3195]: E0813 00:22:53.238444 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.238724 kubelet[3195]: E0813 00:22:53.238710 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.238798 kubelet[3195]: W0813 00:22:53.238786 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.238949 kubelet[3195]: E0813 00:22:53.238864 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.239829 kubelet[3195]: E0813 00:22:53.239344 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.239829 kubelet[3195]: W0813 00:22:53.239358 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.239829 kubelet[3195]: E0813 00:22:53.239416 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.240167 kubelet[3195]: E0813 00:22:53.240149 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.240608 kubelet[3195]: W0813 00:22:53.240228 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.240608 kubelet[3195]: E0813 00:22:53.240424 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.241092 kubelet[3195]: E0813 00:22:53.240990 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.241092 kubelet[3195]: W0813 00:22:53.241006 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.241092 kubelet[3195]: E0813 00:22:53.241042 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.242511 kubelet[3195]: E0813 00:22:53.242392 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.242511 kubelet[3195]: W0813 00:22:53.242409 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.242601 kubelet[3195]: E0813 00:22:53.242544 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.243420 kubelet[3195]: E0813 00:22:53.243259 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.243420 kubelet[3195]: W0813 00:22:53.243280 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.243420 kubelet[3195]: E0813 00:22:53.243301 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.243812 kubelet[3195]: E0813 00:22:53.243668 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.243812 kubelet[3195]: W0813 00:22:53.243685 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.243812 kubelet[3195]: E0813 00:22:53.243704 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:53.244884 kubelet[3195]: E0813 00:22:53.244849 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.245049 kubelet[3195]: W0813 00:22:53.244964 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.245049 kubelet[3195]: E0813 00:22:53.244982 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.255694 containerd[1728]: time="2025-08-13T00:22:53.255116840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:53.256462 containerd[1728]: time="2025-08-13T00:22:53.256147562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:53.257991 containerd[1728]: time="2025-08-13T00:22:53.256398363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:53.257991 containerd[1728]: time="2025-08-13T00:22:53.257179845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:53.264315 kubelet[3195]: E0813 00:22:53.264221 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:53.264315 kubelet[3195]: W0813 00:22:53.264243 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:53.264315 kubelet[3195]: E0813 00:22:53.264263 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:53.299849 systemd[1]: Started cri-containerd-e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815.scope - libcontainer container e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815. Aug 13 00:22:53.345250 containerd[1728]: time="2025-08-13T00:22:53.345208839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lg6q7,Uid:fbf2dd0b-0729-4b4e-9bba-c749415a3d72,Namespace:calico-system,Attempt:0,} returns sandbox id \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\"" Aug 13 00:22:54.307228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592348890.mount: Deactivated successfully. 
Aug 13 00:22:54.978254 containerd[1728]: time="2025-08-13T00:22:54.978200318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:54.980533 containerd[1728]: time="2025-08-13T00:22:54.980395563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:22:54.983022 containerd[1728]: time="2025-08-13T00:22:54.982973369Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:54.986920 containerd[1728]: time="2025-08-13T00:22:54.986856457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:54.987553 containerd[1728]: time="2025-08-13T00:22:54.987419778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.788491862s" Aug 13 00:22:54.987553 containerd[1728]: time="2025-08-13T00:22:54.987453218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:22:54.989082 containerd[1728]: time="2025-08-13T00:22:54.988800541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:22:55.001688 containerd[1728]: time="2025-08-13T00:22:55.001547289Z" level=info msg="CreateContainer within sandbox \"55c1f965682396eb6d263faebd933f1c9b3450e171549e5eb28070e922a4a0a3\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:22:55.034768 containerd[1728]: time="2025-08-13T00:22:55.034724683Z" level=info msg="CreateContainer within sandbox \"55c1f965682396eb6d263faebd933f1c9b3450e171549e5eb28070e922a4a0a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"084df00869cee3d40dd3b98d5ee53ff4caebd96f2b322fb68636c596b6c26027\"" Aug 13 00:22:55.035562 containerd[1728]: time="2025-08-13T00:22:55.035532884Z" level=info msg="StartContainer for \"084df00869cee3d40dd3b98d5ee53ff4caebd96f2b322fb68636c596b6c26027\"" Aug 13 00:22:55.067858 systemd[1]: Started cri-containerd-084df00869cee3d40dd3b98d5ee53ff4caebd96f2b322fb68636c596b6c26027.scope - libcontainer container 084df00869cee3d40dd3b98d5ee53ff4caebd96f2b322fb68636c596b6c26027. Aug 13 00:22:55.101070 kubelet[3195]: E0813 00:22:55.100585 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:22:55.101871 containerd[1728]: time="2025-08-13T00:22:55.101753670Z" level=info msg="StartContainer for \"084df00869cee3d40dd3b98d5ee53ff4caebd96f2b322fb68636c596b6c26027\" returns successfully" Aug 13 00:22:55.232986 kubelet[3195]: E0813 00:22:55.232885 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.232986 kubelet[3195]: W0813 00:22:55.232929 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.232986 kubelet[3195]: E0813 00:22:55.232954 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.233555 kubelet[3195]: E0813 00:22:55.233533 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.233555 kubelet[3195]: W0813 00:22:55.233550 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.233699 kubelet[3195]: E0813 00:22:55.233570 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.233894 kubelet[3195]: E0813 00:22:55.233871 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.233894 kubelet[3195]: W0813 00:22:55.233888 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.234063 kubelet[3195]: E0813 00:22:55.233900 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.234748 kubelet[3195]: E0813 00:22:55.234724 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.234803 kubelet[3195]: W0813 00:22:55.234753 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.234803 kubelet[3195]: E0813 00:22:55.234767 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.235031 kubelet[3195]: E0813 00:22:55.235013 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.235031 kubelet[3195]: W0813 00:22:55.235028 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.235189 kubelet[3195]: E0813 00:22:55.235040 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.235295 kubelet[3195]: E0813 00:22:55.235277 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.235295 kubelet[3195]: W0813 00:22:55.235292 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.235359 kubelet[3195]: E0813 00:22:55.235303 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.235922 kubelet[3195]: E0813 00:22:55.235900 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.235922 kubelet[3195]: W0813 00:22:55.235918 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.236013 kubelet[3195]: E0813 00:22:55.235930 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.236546 kubelet[3195]: E0813 00:22:55.236524 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.236607 kubelet[3195]: W0813 00:22:55.236541 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.236607 kubelet[3195]: E0813 00:22:55.236575 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.236960 kubelet[3195]: E0813 00:22:55.236901 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.236960 kubelet[3195]: W0813 00:22:55.236917 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.236960 kubelet[3195]: E0813 00:22:55.236930 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.237310 kubelet[3195]: E0813 00:22:55.237277 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.237310 kubelet[3195]: W0813 00:22:55.237306 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.237427 kubelet[3195]: E0813 00:22:55.237319 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.237608 kubelet[3195]: E0813 00:22:55.237586 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.237608 kubelet[3195]: W0813 00:22:55.237604 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.238475 kubelet[3195]: E0813 00:22:55.237615 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.238813 kubelet[3195]: E0813 00:22:55.238791 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.238813 kubelet[3195]: W0813 00:22:55.238808 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.238813 kubelet[3195]: E0813 00:22:55.238821 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.239097 kubelet[3195]: E0813 00:22:55.239080 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.239097 kubelet[3195]: W0813 00:22:55.239094 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.239166 kubelet[3195]: E0813 00:22:55.239105 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.239330 kubelet[3195]: E0813 00:22:55.239313 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.239330 kubelet[3195]: W0813 00:22:55.239327 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.239409 kubelet[3195]: E0813 00:22:55.239347 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.239593 kubelet[3195]: E0813 00:22:55.239563 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.239593 kubelet[3195]: W0813 00:22:55.239578 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.239593 kubelet[3195]: E0813 00:22:55.239595 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.251147 kubelet[3195]: E0813 00:22:55.251005 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.251147 kubelet[3195]: W0813 00:22:55.251029 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.251147 kubelet[3195]: E0813 00:22:55.251048 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.251505 kubelet[3195]: E0813 00:22:55.251436 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.251505 kubelet[3195]: W0813 00:22:55.251449 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.251505 kubelet[3195]: E0813 00:22:55.251468 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.252057 kubelet[3195]: E0813 00:22:55.252024 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.252057 kubelet[3195]: W0813 00:22:55.252050 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.252157 kubelet[3195]: E0813 00:22:55.252072 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.252882 kubelet[3195]: E0813 00:22:55.252860 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.252882 kubelet[3195]: W0813 00:22:55.252880 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.252974 kubelet[3195]: E0813 00:22:55.252907 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.253135 kubelet[3195]: E0813 00:22:55.253118 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.253135 kubelet[3195]: W0813 00:22:55.253134 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.253239 kubelet[3195]: E0813 00:22:55.253220 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.254067 kubelet[3195]: E0813 00:22:55.254040 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.254067 kubelet[3195]: W0813 00:22:55.254068 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.254261 kubelet[3195]: E0813 00:22:55.254171 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.254480 kubelet[3195]: E0813 00:22:55.254437 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.254480 kubelet[3195]: W0813 00:22:55.254459 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.254615 kubelet[3195]: E0813 00:22:55.254569 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.255727 kubelet[3195]: E0813 00:22:55.255697 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.255727 kubelet[3195]: W0813 00:22:55.255721 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.256030 kubelet[3195]: E0813 00:22:55.255836 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.256232 kubelet[3195]: E0813 00:22:55.256206 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.256232 kubelet[3195]: W0813 00:22:55.256228 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.256336 kubelet[3195]: E0813 00:22:55.256315 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.256549 kubelet[3195]: E0813 00:22:55.256524 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.256549 kubelet[3195]: W0813 00:22:55.256542 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.256771 kubelet[3195]: E0813 00:22:55.256755 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.256887 kubelet[3195]: E0813 00:22:55.256866 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.256887 kubelet[3195]: W0813 00:22:55.256883 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.256962 kubelet[3195]: E0813 00:22:55.256900 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.257159 kubelet[3195]: E0813 00:22:55.257115 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.257159 kubelet[3195]: W0813 00:22:55.257133 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.257159 kubelet[3195]: E0813 00:22:55.257152 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.257946 kubelet[3195]: E0813 00:22:55.257921 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.257946 kubelet[3195]: W0813 00:22:55.257944 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.258104 kubelet[3195]: E0813 00:22:55.258040 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.258745 kubelet[3195]: E0813 00:22:55.258716 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.258745 kubelet[3195]: W0813 00:22:55.258736 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.258842 kubelet[3195]: E0813 00:22:55.258756 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.259033 kubelet[3195]: E0813 00:22:55.259015 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.259033 kubelet[3195]: W0813 00:22:55.259030 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.259100 kubelet[3195]: E0813 00:22:55.259046 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.259618 kubelet[3195]: E0813 00:22:55.259592 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.259618 kubelet[3195]: W0813 00:22:55.259611 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.259766 kubelet[3195]: E0813 00:22:55.259726 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:55.260328 kubelet[3195]: E0813 00:22:55.260298 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.260328 kubelet[3195]: W0813 00:22:55.260321 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.260415 kubelet[3195]: E0813 00:22:55.260335 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:55.260836 kubelet[3195]: E0813 00:22:55.260806 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:55.260836 kubelet[3195]: W0813 00:22:55.260827 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:55.260911 kubelet[3195]: E0813 00:22:55.260841 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:56.125535 containerd[1728]: time="2025-08-13T00:22:56.125478354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:56.128269 containerd[1728]: time="2025-08-13T00:22:56.128107599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:22:56.130883 containerd[1728]: time="2025-08-13T00:22:56.130822164Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:56.134552 containerd[1728]: time="2025-08-13T00:22:56.134487451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:56.135435 containerd[1728]: time="2025-08-13T00:22:56.135300653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.146279591s" Aug 13 00:22:56.135435 containerd[1728]: time="2025-08-13T00:22:56.135338293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:22:56.137666 containerd[1728]: time="2025-08-13T00:22:56.137599217Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:22:56.169800 containerd[1728]: time="2025-08-13T00:22:56.169753439Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b\"" Aug 13 00:22:56.170762 containerd[1728]: time="2025-08-13T00:22:56.170332680Z" level=info msg="StartContainer for \"1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b\"" Aug 13 00:22:56.189110 kubelet[3195]: I0813 00:22:56.189043 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:22:56.203983 systemd[1]: Started cri-containerd-1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b.scope - libcontainer container 1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b. Aug 13 00:22:56.239689 containerd[1728]: time="2025-08-13T00:22:56.239173048Z" level=info msg="StartContainer for \"1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b\" returns successfully" Aug 13 00:22:56.247831 kubelet[3195]: E0813 00:22:56.247712 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.248187 kubelet[3195]: W0813 00:22:56.247795 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.248187 kubelet[3195]: E0813 00:22:56.247963 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:56.248338 kubelet[3195]: E0813 00:22:56.248309 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.248338 kubelet[3195]: W0813 00:22:56.248329 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.248489 kubelet[3195]: E0813 00:22:56.248441 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:56.249201 kubelet[3195]: E0813 00:22:56.249144 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.249201 kubelet[3195]: W0813 00:22:56.249190 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.249201 kubelet[3195]: E0813 00:22:56.249204 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:56.249856 kubelet[3195]: E0813 00:22:56.249828 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.249856 kubelet[3195]: W0813 00:22:56.249848 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.249945 kubelet[3195]: E0813 00:22:56.249862 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:56.250418 kubelet[3195]: E0813 00:22:56.250100 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.250418 kubelet[3195]: W0813 00:22:56.250133 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.250418 kubelet[3195]: E0813 00:22:56.250145 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:22:56.250418 kubelet[3195]: E0813 00:22:56.250332 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.250418 kubelet[3195]: W0813 00:22:56.250341 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.250418 kubelet[3195]: E0813 00:22:56.250351 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:56.250582 kubelet[3195]: E0813 00:22:56.250553 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:22:56.250582 kubelet[3195]: W0813 00:22:56.250562 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:22:56.250582 kubelet[3195]: E0813 00:22:56.250571 3195 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:22:56.252475 systemd[1]: cri-containerd-1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b.scope: Deactivated successfully. Aug 13 00:22:56.278043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b-rootfs.mount: Deactivated successfully. 
Aug 13 00:22:57.101114 kubelet[3195]: E0813 00:22:57.101063 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:22:57.212186 kubelet[3195]: I0813 00:22:57.211146 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84cdcffc59-xsmws" podStartSLOduration=3.420922578 podStartE2EDuration="5.211130004s" podCreationTimestamp="2025-08-13 00:22:52 +0000 UTC" firstStartedPulling="2025-08-13 00:22:53.198274755 +0000 UTC m=+25.588170366" lastFinishedPulling="2025-08-13 00:22:54.988482221 +0000 UTC m=+27.378377792" observedRunningTime="2025-08-13 00:22:55.202085572 +0000 UTC m=+27.591981183" watchObservedRunningTime="2025-08-13 00:22:57.211130004 +0000 UTC m=+29.601025615" Aug 13 00:22:57.382872 containerd[1728]: time="2025-08-13T00:22:57.382367179Z" level=info msg="shim disconnected" id=1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b namespace=k8s.io Aug 13 00:22:57.382872 containerd[1728]: time="2025-08-13T00:22:57.382420499Z" level=warning msg="cleaning up after shim disconnected" id=1070828e9f00c42d34c51bce684b86a7832fb200667de0d1d4bf674948ad333b namespace=k8s.io Aug 13 00:22:57.382872 containerd[1728]: time="2025-08-13T00:22:57.382428779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:58.200866 containerd[1728]: time="2025-08-13T00:22:58.200589710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:22:59.100590 kubelet[3195]: E0813 00:22:59.100511 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:23:00.511056 containerd[1728]: time="2025-08-13T00:23:00.511004495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:00.514385 containerd[1728]: time="2025-08-13T00:23:00.514186380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:23:00.517879 containerd[1728]: time="2025-08-13T00:23:00.517739466Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:00.523048 containerd[1728]: time="2025-08-13T00:23:00.522983235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:00.524188 containerd[1728]: time="2025-08-13T00:23:00.523564796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.322908046s" Aug 13 00:23:00.524188 containerd[1728]: time="2025-08-13T00:23:00.523600396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:23:00.527947 containerd[1728]: time="2025-08-13T00:23:00.527916364Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:23:00.563120 
containerd[1728]: time="2025-08-13T00:23:00.563059784Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2\"" Aug 13 00:23:00.564419 containerd[1728]: time="2025-08-13T00:23:00.564338147Z" level=info msg="StartContainer for \"6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2\"" Aug 13 00:23:00.606873 systemd[1]: Started cri-containerd-6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2.scope - libcontainer container 6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2. Aug 13 00:23:00.643485 containerd[1728]: time="2025-08-13T00:23:00.643331443Z" level=info msg="StartContainer for \"6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2\" returns successfully" Aug 13 00:23:01.100826 kubelet[3195]: E0813 00:23:01.100746 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:23:01.910795 containerd[1728]: time="2025-08-13T00:23:01.910679068Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:23:01.913311 systemd[1]: cri-containerd-6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2.scope: Deactivated successfully. Aug 13 00:23:01.936571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2-rootfs.mount: Deactivated successfully. 
Aug 13 00:23:01.993478 kubelet[3195]: I0813 00:23:01.993449 3195 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:23:02.370823 kubelet[3195]: I0813 00:23:02.102900 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9cfaf342-f7c6-417f-a4c3-39fca511cded-goldmane-key-pair\") pod \"goldmane-58fd7646b9-wvtl5\" (UID: \"9cfaf342-f7c6-417f-a4c3-39fca511cded\") " pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:02.370823 kubelet[3195]: I0813 00:23:02.102937 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-ca-bundle\") pod \"whisker-546fcc447-djg2f\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " pod="calico-system/whisker-546fcc447-djg2f" Aug 13 00:23:02.370823 kubelet[3195]: I0813 00:23:02.102956 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zbd\" (UniqueName: \"kubernetes.io/projected/eacb4a1e-81fe-44e2-8375-85476e370ebd-kube-api-access-25zbd\") pod \"calico-kube-controllers-76ddbf5f64-sh7tp\" (UID: \"eacb4a1e-81fe-44e2-8375-85476e370ebd\") " pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" Aug 13 00:23:02.370823 kubelet[3195]: I0813 00:23:02.102976 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eacb4a1e-81fe-44e2-8375-85476e370ebd-tigera-ca-bundle\") pod \"calico-kube-controllers-76ddbf5f64-sh7tp\" (UID: \"eacb4a1e-81fe-44e2-8375-85476e370ebd\") " pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" Aug 13 00:23:02.370823 kubelet[3195]: I0813 00:23:02.103004 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ed9f975-b4fd-44f7-a88d-65b130cbc3e0-config-volume\") pod \"coredns-7c65d6cfc9-gm4v8\" (UID: \"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0\") " pod="kube-system/coredns-7c65d6cfc9-gm4v8" Aug 13 00:23:02.036982 systemd[1]: Created slice kubepods-burstable-pod4056208b_d9c3_4786_99a2_567d10cf8d83.slice - libcontainer container kubepods-burstable-pod4056208b_d9c3_4786_99a2_567d10cf8d83.slice. Aug 13 00:23:02.371310 kubelet[3195]: I0813 00:23:02.103022 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-backend-key-pair\") pod \"whisker-546fcc447-djg2f\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " pod="calico-system/whisker-546fcc447-djg2f" Aug 13 00:23:02.371310 kubelet[3195]: I0813 00:23:02.103039 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd4z\" (UniqueName: \"kubernetes.io/projected/4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a-kube-api-access-6fd4z\") pod \"calico-apiserver-7b596cbbd-vbl26\" (UID: \"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a\") " pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" Aug 13 00:23:02.371310 kubelet[3195]: I0813 00:23:02.103057 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg252\" (UniqueName: \"kubernetes.io/projected/4ed9f975-b4fd-44f7-a88d-65b130cbc3e0-kube-api-access-gg252\") pod \"coredns-7c65d6cfc9-gm4v8\" (UID: \"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0\") " pod="kube-system/coredns-7c65d6cfc9-gm4v8" Aug 13 00:23:02.371310 kubelet[3195]: I0813 00:23:02.103072 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cfaf342-f7c6-417f-a4c3-39fca511cded-config\") pod \"goldmane-58fd7646b9-wvtl5\" 
(UID: \"9cfaf342-f7c6-417f-a4c3-39fca511cded\") " pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:02.371310 kubelet[3195]: I0813 00:23:02.103091 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cfaf342-f7c6-417f-a4c3-39fca511cded-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-wvtl5\" (UID: \"9cfaf342-f7c6-417f-a4c3-39fca511cded\") " pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:02.058387 systemd[1]: Created slice kubepods-besteffort-podeacb4a1e_81fe_44e2_8375_85476e370ebd.slice - libcontainer container kubepods-besteffort-podeacb4a1e_81fe_44e2_8375_85476e370ebd.slice. Aug 13 00:23:02.371461 kubelet[3195]: I0813 00:23:02.103108 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whgjj\" (UniqueName: \"kubernetes.io/projected/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-kube-api-access-whgjj\") pod \"whisker-546fcc447-djg2f\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " pod="calico-system/whisker-546fcc447-djg2f" Aug 13 00:23:02.371461 kubelet[3195]: I0813 00:23:02.103129 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnd4x\" (UniqueName: \"kubernetes.io/projected/9cfaf342-f7c6-417f-a4c3-39fca511cded-kube-api-access-mnd4x\") pod \"goldmane-58fd7646b9-wvtl5\" (UID: \"9cfaf342-f7c6-417f-a4c3-39fca511cded\") " pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:02.371461 kubelet[3195]: I0813 00:23:02.103144 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a-calico-apiserver-certs\") pod \"calico-apiserver-7b596cbbd-vbl26\" (UID: \"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a\") " pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" Aug 13 00:23:02.371461 
kubelet[3195]: I0813 00:23:02.103162 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7gv\" (UniqueName: \"kubernetes.io/projected/4056208b-d9c3-4786-99a2-567d10cf8d83-kube-api-access-fm7gv\") pod \"coredns-7c65d6cfc9-2x28l\" (UID: \"4056208b-d9c3-4786-99a2-567d10cf8d83\") " pod="kube-system/coredns-7c65d6cfc9-2x28l" Aug 13 00:23:02.371461 kubelet[3195]: I0813 00:23:02.103177 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef19092f-3405-456e-849a-6f31b80acace-calico-apiserver-certs\") pod \"calico-apiserver-7b596cbbd-dc5df\" (UID: \"ef19092f-3405-456e-849a-6f31b80acace\") " pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" Aug 13 00:23:02.063977 systemd[1]: Created slice kubepods-besteffort-podef19092f_3405_456e_849a_6f31b80acace.slice - libcontainer container kubepods-besteffort-podef19092f_3405_456e_849a_6f31b80acace.slice. 
Aug 13 00:23:02.371608 kubelet[3195]: I0813 00:23:02.103196 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2mc\" (UniqueName: \"kubernetes.io/projected/ef19092f-3405-456e-849a-6f31b80acace-kube-api-access-bz2mc\") pod \"calico-apiserver-7b596cbbd-dc5df\" (UID: \"ef19092f-3405-456e-849a-6f31b80acace\") " pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" Aug 13 00:23:02.371608 kubelet[3195]: I0813 00:23:02.103212 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4056208b-d9c3-4786-99a2-567d10cf8d83-config-volume\") pod \"coredns-7c65d6cfc9-2x28l\" (UID: \"4056208b-d9c3-4786-99a2-567d10cf8d83\") " pod="kube-system/coredns-7c65d6cfc9-2x28l" Aug 13 00:23:02.088192 systemd[1]: Created slice kubepods-besteffort-podb4cd9644_0d9b_47e1_8c1b_e054338d94ce.slice - libcontainer container kubepods-besteffort-podb4cd9644_0d9b_47e1_8c1b_e054338d94ce.slice. Aug 13 00:23:02.099951 systemd[1]: Created slice kubepods-besteffort-pod4f3ad36d_19e4_4d1b_bbd6_393f7e34f68a.slice - libcontainer container kubepods-besteffort-pod4f3ad36d_19e4_4d1b_bbd6_393f7e34f68a.slice. Aug 13 00:23:02.101826 systemd[1]: Created slice kubepods-burstable-pod4ed9f975_b4fd_44f7_a88d_65b130cbc3e0.slice - libcontainer container kubepods-burstable-pod4ed9f975_b4fd_44f7_a88d_65b130cbc3e0.slice. Aug 13 00:23:02.113423 systemd[1]: Created slice kubepods-besteffort-pod9cfaf342_f7c6_417f_a4c3_39fca511cded.slice - libcontainer container kubepods-besteffort-pod9cfaf342_f7c6_417f_a4c3_39fca511cded.slice. 
Aug 13 00:23:02.719610 containerd[1728]: time="2025-08-13T00:23:02.719560503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2x28l,Uid:4056208b-d9c3-4786-99a2-567d10cf8d83,Namespace:kube-system,Attempt:0,}" Aug 13 00:23:02.723976 containerd[1728]: time="2025-08-13T00:23:02.723932911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-vbl26,Uid:4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:23:02.724406 containerd[1728]: time="2025-08-13T00:23:02.724121751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddbf5f64-sh7tp,Uid:eacb4a1e-81fe-44e2-8375-85476e370ebd,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:02.724926 containerd[1728]: time="2025-08-13T00:23:02.724679112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm4v8,Uid:4ed9f975-b4fd-44f7-a88d-65b130cbc3e0,Namespace:kube-system,Attempt:0,}" Aug 13 00:23:02.727335 containerd[1728]: time="2025-08-13T00:23:02.727134556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546fcc447-djg2f,Uid:b4cd9644-0d9b-47e1-8c1b-e054338d94ce,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:02.727739 containerd[1728]: time="2025-08-13T00:23:02.727195836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-dc5df,Uid:ef19092f-3405-456e-849a-6f31b80acace,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:23:02.727739 containerd[1728]: time="2025-08-13T00:23:02.727625317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wvtl5,Uid:9cfaf342-f7c6-417f-a4c3-39fca511cded,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:02.830930 containerd[1728]: time="2025-08-13T00:23:02.830874975Z" level=info msg="shim disconnected" id=6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2 namespace=k8s.io Aug 13 00:23:02.831182 containerd[1728]: time="2025-08-13T00:23:02.831098656Z" 
level=warning msg="cleaning up after shim disconnected" id=6cfc884ed4d1ffd7beff68d2df0f569f14709c1f877a90c2c14bd39224faf0d2 namespace=k8s.io Aug 13 00:23:02.831182 containerd[1728]: time="2025-08-13T00:23:02.831114136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:03.111188 systemd[1]: Created slice kubepods-besteffort-podf75dc80d_3a70_4464_bb0d_78154b4f7aab.slice - libcontainer container kubepods-besteffort-podf75dc80d_3a70_4464_bb0d_78154b4f7aab.slice. Aug 13 00:23:03.118701 containerd[1728]: time="2025-08-13T00:23:03.117618510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6v4g,Uid:f75dc80d-3a70-4464-bb0d-78154b4f7aab,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:03.131073 containerd[1728]: time="2025-08-13T00:23:03.131026333Z" level=error msg="Failed to destroy network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.131778 containerd[1728]: time="2025-08-13T00:23:03.131745694Z" level=error msg="encountered an error cleaning up failed sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.133592 containerd[1728]: time="2025-08-13T00:23:03.133525577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-vbl26,Uid:4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.134629 kubelet[3195]: E0813 00:23:03.134264 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.134629 kubelet[3195]: E0813 00:23:03.134362 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" Aug 13 00:23:03.134629 kubelet[3195]: E0813 00:23:03.134380 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" Aug 13 00:23:03.134825 kubelet[3195]: E0813 00:23:03.134421 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b596cbbd-vbl26_calico-apiserver(4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b596cbbd-vbl26_calico-apiserver(4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" podUID="4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a" Aug 13 00:23:03.190675 containerd[1728]: time="2025-08-13T00:23:03.190606636Z" level=error msg="Failed to destroy network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.191909 containerd[1728]: time="2025-08-13T00:23:03.191788038Z" level=error msg="encountered an error cleaning up failed sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.191909 containerd[1728]: time="2025-08-13T00:23:03.191862158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wvtl5,Uid:9cfaf342-f7c6-417f-a4c3-39fca511cded,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.192276 kubelet[3195]: E0813 00:23:03.192242 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.192733 kubelet[3195]: E0813 00:23:03.192552 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:03.192733 kubelet[3195]: E0813 00:23:03.192595 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-wvtl5" Aug 13 00:23:03.192733 kubelet[3195]: E0813 00:23:03.192665 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-wvtl5_calico-system(9cfaf342-f7c6-417f-a4c3-39fca511cded)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-wvtl5_calico-system(9cfaf342-f7c6-417f-a4c3-39fca511cded)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wvtl5" 
podUID="9cfaf342-f7c6-417f-a4c3-39fca511cded" Aug 13 00:23:03.203696 containerd[1728]: time="2025-08-13T00:23:03.203297497Z" level=error msg="Failed to destroy network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.207279 containerd[1728]: time="2025-08-13T00:23:03.207207864Z" level=error msg="encountered an error cleaning up failed sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.207442 containerd[1728]: time="2025-08-13T00:23:03.207389584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2x28l,Uid:4056208b-d9c3-4786-99a2-567d10cf8d83,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.209260 kubelet[3195]: E0813 00:23:03.209225 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.209674 kubelet[3195]: E0813 00:23:03.209423 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2x28l" Aug 13 00:23:03.209674 kubelet[3195]: E0813 00:23:03.209607 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2x28l" Aug 13 00:23:03.210394 kubelet[3195]: E0813 00:23:03.210019 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2x28l_kube-system(4056208b-d9c3-4786-99a2-567d10cf8d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2x28l_kube-system(4056208b-d9c3-4786-99a2-567d10cf8d83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2x28l" podUID="4056208b-d9c3-4786-99a2-567d10cf8d83" Aug 13 00:23:03.217171 kubelet[3195]: I0813 00:23:03.216559 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:03.217344 containerd[1728]: time="2025-08-13T00:23:03.217311122Z" level=info msg="StopPodSandbox for 
\"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" Aug 13 00:23:03.218081 containerd[1728]: time="2025-08-13T00:23:03.217547202Z" level=info msg="Ensure that sandbox 61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9 in task-service has been cleanup successfully" Aug 13 00:23:03.224059 containerd[1728]: time="2025-08-13T00:23:03.223917453Z" level=error msg="Failed to destroy network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.226137 containerd[1728]: time="2025-08-13T00:23:03.225992177Z" level=error msg="encountered an error cleaning up failed sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.226137 containerd[1728]: time="2025-08-13T00:23:03.226096937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm4v8,Uid:4ed9f975-b4fd-44f7-a88d-65b130cbc3e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.226393 kubelet[3195]: E0813 00:23:03.226352 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.226469 kubelet[3195]: E0813 00:23:03.226410 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gm4v8" Aug 13 00:23:03.226469 kubelet[3195]: E0813 00:23:03.226429 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gm4v8" Aug 13 00:23:03.226549 kubelet[3195]: E0813 00:23:03.226469 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gm4v8_kube-system(4ed9f975-b4fd-44f7-a88d-65b130cbc3e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gm4v8_kube-system(4ed9f975-b4fd-44f7-a88d-65b130cbc3e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gm4v8" podUID="4ed9f975-b4fd-44f7-a88d-65b130cbc3e0" Aug 13 00:23:03.228246 containerd[1728]: time="2025-08-13T00:23:03.228131740Z" 
level=error msg="Failed to destroy network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.231948 containerd[1728]: time="2025-08-13T00:23:03.231901307Z" level=error msg="encountered an error cleaning up failed sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.232033 containerd[1728]: time="2025-08-13T00:23:03.231964587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-546fcc447-djg2f,Uid:b4cd9644-0d9b-47e1-8c1b-e054338d94ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.232314 kubelet[3195]: E0813 00:23:03.232202 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.232314 kubelet[3195]: E0813 00:23:03.232269 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546fcc447-djg2f" Aug 13 00:23:03.232314 kubelet[3195]: E0813 00:23:03.232287 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-546fcc447-djg2f" Aug 13 00:23:03.232435 kubelet[3195]: E0813 00:23:03.232345 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-546fcc447-djg2f_calico-system(b4cd9644-0d9b-47e1-8c1b-e054338d94ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-546fcc447-djg2f_calico-system(b4cd9644-0d9b-47e1-8c1b-e054338d94ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546fcc447-djg2f" podUID="b4cd9644-0d9b-47e1-8c1b-e054338d94ce" Aug 13 00:23:03.234617 containerd[1728]: time="2025-08-13T00:23:03.234138831Z" level=error msg="Failed to destroy network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.238818 
containerd[1728]: time="2025-08-13T00:23:03.236592795Z" level=error msg="encountered an error cleaning up failed sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.238818 containerd[1728]: time="2025-08-13T00:23:03.236695355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-dc5df,Uid:ef19092f-3405-456e-849a-6f31b80acace,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.238818 containerd[1728]: time="2025-08-13T00:23:03.236862155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:23:03.239035 kubelet[3195]: E0813 00:23:03.238374 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.239035 kubelet[3195]: E0813 00:23:03.238454 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" Aug 13 00:23:03.239035 kubelet[3195]: E0813 00:23:03.238476 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" Aug 13 00:23:03.239120 kubelet[3195]: E0813 00:23:03.238514 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b596cbbd-dc5df_calico-apiserver(ef19092f-3405-456e-849a-6f31b80acace)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b596cbbd-dc5df_calico-apiserver(ef19092f-3405-456e-849a-6f31b80acace)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" podUID="ef19092f-3405-456e-849a-6f31b80acace" Aug 13 00:23:03.240366 kubelet[3195]: I0813 00:23:03.240231 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:03.243145 kubelet[3195]: I0813 00:23:03.243126 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:03.243716 containerd[1728]: time="2025-08-13T00:23:03.243679647Z" level=error msg="Failed to destroy network for sandbox 
\"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.244901 containerd[1728]: time="2025-08-13T00:23:03.244806929Z" level=info msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" Aug 13 00:23:03.245588 containerd[1728]: time="2025-08-13T00:23:03.245389490Z" level=info msg="Ensure that sandbox 8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a in task-service has been cleanup successfully" Aug 13 00:23:03.248392 containerd[1728]: time="2025-08-13T00:23:03.245861051Z" level=info msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" Aug 13 00:23:03.248913 containerd[1728]: time="2025-08-13T00:23:03.248718856Z" level=info msg="Ensure that sandbox 5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818 in task-service has been cleanup successfully" Aug 13 00:23:03.250353 containerd[1728]: time="2025-08-13T00:23:03.246162411Z" level=error msg="encountered an error cleaning up failed sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.250353 containerd[1728]: time="2025-08-13T00:23:03.250271618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddbf5f64-sh7tp,Uid:eacb4a1e-81fe-44e2-8375-85476e370ebd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.250789 kubelet[3195]: E0813 00:23:03.250739 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.250954 kubelet[3195]: E0813 00:23:03.250804 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" Aug 13 00:23:03.250954 kubelet[3195]: E0813 00:23:03.250825 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" Aug 13 00:23:03.250954 kubelet[3195]: E0813 00:23:03.250866 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ddbf5f64-sh7tp_calico-system(eacb4a1e-81fe-44e2-8375-85476e370ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ddbf5f64-sh7tp_calico-system(eacb4a1e-81fe-44e2-8375-85476e370ebd)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" podUID="eacb4a1e-81fe-44e2-8375-85476e370ebd" Aug 13 00:23:03.305406 containerd[1728]: time="2025-08-13T00:23:03.305239633Z" level=error msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" failed" error="failed to destroy network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.306096 kubelet[3195]: E0813 00:23:03.305826 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:03.306096 kubelet[3195]: E0813 00:23:03.305911 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818"} Aug 13 00:23:03.306096 kubelet[3195]: E0813 00:23:03.305988 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:03.306096 kubelet[3195]: E0813 00:23:03.306058 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" podUID="4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a" Aug 13 00:23:03.320800 containerd[1728]: time="2025-08-13T00:23:03.320622940Z" level=error msg="StopPodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" failed" error="failed to destroy network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.321253 kubelet[3195]: E0813 00:23:03.321204 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:03.321374 kubelet[3195]: E0813 00:23:03.321354 3195 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9"} Aug 13 00:23:03.321517 kubelet[3195]: E0813 00:23:03.321447 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4056208b-d9c3-4786-99a2-567d10cf8d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:03.321627 kubelet[3195]: E0813 00:23:03.321473 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4056208b-d9c3-4786-99a2-567d10cf8d83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2x28l" podUID="4056208b-d9c3-4786-99a2-567d10cf8d83" Aug 13 00:23:03.341807 containerd[1728]: time="2025-08-13T00:23:03.341762136Z" level=error msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" failed" error="failed to destroy network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.342437 kubelet[3195]: E0813 00:23:03.342181 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:03.342437 kubelet[3195]: E0813 00:23:03.342246 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a"} Aug 13 00:23:03.342437 kubelet[3195]: E0813 00:23:03.342283 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9cfaf342-f7c6-417f-a4c3-39fca511cded\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:03.342437 kubelet[3195]: E0813 00:23:03.342314 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9cfaf342-f7c6-417f-a4c3-39fca511cded\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-wvtl5" podUID="9cfaf342-f7c6-417f-a4c3-39fca511cded" Aug 13 00:23:03.348076 containerd[1728]: time="2025-08-13T00:23:03.348016867Z" level=error msg="Failed to destroy network for sandbox 
\"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.348604 containerd[1728]: time="2025-08-13T00:23:03.348549148Z" level=error msg="encountered an error cleaning up failed sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.348973 containerd[1728]: time="2025-08-13T00:23:03.348771068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6v4g,Uid:f75dc80d-3a70-4464-bb0d-78154b4f7aab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.349086 kubelet[3195]: E0813 00:23:03.349030 3195 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:03.349178 kubelet[3195]: E0813 00:23:03.349087 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:23:03.349178 kubelet[3195]: E0813 00:23:03.349107 3195 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l6v4g" Aug 13 00:23:03.349178 kubelet[3195]: E0813 00:23:03.349151 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l6v4g_calico-system(f75dc80d-3a70-4464-bb0d-78154b4f7aab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l6v4g_calico-system(f75dc80d-3a70-4464-bb0d-78154b4f7aab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:23:03.938605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c-shm.mount: Deactivated successfully. Aug 13 00:23:03.938712 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9-shm.mount: Deactivated successfully. 
Aug 13 00:23:03.938793 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818-shm.mount: Deactivated successfully. Aug 13 00:23:04.246328 kubelet[3195]: I0813 00:23:04.246290 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:04.248597 containerd[1728]: time="2025-08-13T00:23:04.248420864Z" level=info msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" Aug 13 00:23:04.250409 containerd[1728]: time="2025-08-13T00:23:04.248701145Z" level=info msg="Ensure that sandbox e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f in task-service has been cleanup successfully" Aug 13 00:23:04.251995 kubelet[3195]: I0813 00:23:04.251831 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:04.253487 containerd[1728]: time="2025-08-13T00:23:04.253029553Z" level=info msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" Aug 13 00:23:04.253487 containerd[1728]: time="2025-08-13T00:23:04.253207553Z" level=info msg="Ensure that sandbox 63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5 in task-service has been cleanup successfully" Aug 13 00:23:04.258420 kubelet[3195]: I0813 00:23:04.257796 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:04.260899 containerd[1728]: time="2025-08-13T00:23:04.260804688Z" level=info msg="StopPodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" Aug 13 00:23:04.261174 containerd[1728]: time="2025-08-13T00:23:04.261150769Z" level=info msg="Ensure that sandbox 
7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8 in task-service has been cleanup successfully" Aug 13 00:23:04.270033 kubelet[3195]: I0813 00:23:04.270001 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:04.276384 containerd[1728]: time="2025-08-13T00:23:04.276348278Z" level=info msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\"" Aug 13 00:23:04.277741 containerd[1728]: time="2025-08-13T00:23:04.277433800Z" level=info msg="Ensure that sandbox be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c in task-service has been cleanup successfully" Aug 13 00:23:04.284139 kubelet[3195]: I0813 00:23:04.284117 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:04.286313 containerd[1728]: time="2025-08-13T00:23:04.286270657Z" level=info msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" Aug 13 00:23:04.288499 containerd[1728]: time="2025-08-13T00:23:04.288462342Z" level=info msg="Ensure that sandbox c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966 in task-service has been cleanup successfully" Aug 13 00:23:04.353436 containerd[1728]: time="2025-08-13T00:23:04.353281427Z" level=error msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" failed" error="failed to destroy network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:04.353599 kubelet[3195]: E0813 00:23:04.353556 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:04.353661 kubelet[3195]: E0813 00:23:04.353603 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"} Aug 13 00:23:04.353661 kubelet[3195]: E0813 00:23:04.353651 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eacb4a1e-81fe-44e2-8375-85476e370ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:04.353736 kubelet[3195]: E0813 00:23:04.353675 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eacb4a1e-81fe-44e2-8375-85476e370ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" podUID="eacb4a1e-81fe-44e2-8375-85476e370ebd" Aug 13 00:23:04.364559 containerd[1728]: time="2025-08-13T00:23:04.364117488Z" level=error msg="StopPodSandbox for 
\"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" failed" error="failed to destroy network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:04.364559 containerd[1728]: time="2025-08-13T00:23:04.364243288Z" level=error msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" failed" error="failed to destroy network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:04.364758 kubelet[3195]: E0813 00:23:04.364412 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:04.364758 kubelet[3195]: E0813 00:23:04.364412 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:04.364758 kubelet[3195]: E0813 00:23:04.364460 3195 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f"} Aug 13 00:23:04.364758 kubelet[3195]: E0813 00:23:04.364474 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8"} Aug 13 00:23:04.364758 kubelet[3195]: E0813 00:23:04.364495 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:04.364932 kubelet[3195]: E0813 00:23:04.364501 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef19092f-3405-456e-849a-6f31b80acace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:04.364932 kubelet[3195]: E0813 00:23:04.364516 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-546fcc447-djg2f" podUID="b4cd9644-0d9b-47e1-8c1b-e054338d94ce" Aug 13 00:23:04.364932 kubelet[3195]: E0813 00:23:04.364519 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef19092f-3405-456e-849a-6f31b80acace\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" podUID="ef19092f-3405-456e-849a-6f31b80acace" Aug 13 00:23:04.370603 containerd[1728]: time="2025-08-13T00:23:04.370540021Z" level=error msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" failed" error="failed to destroy network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:04.371287 kubelet[3195]: E0813 00:23:04.370815 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:04.371287 kubelet[3195]: E0813 00:23:04.370938 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966"} Aug 13 00:23:04.371287 kubelet[3195]: E0813 00:23:04.370988 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:04.371287 kubelet[3195]: E0813 00:23:04.371014 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f75dc80d-3a70-4464-bb0d-78154b4f7aab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l6v4g" podUID="f75dc80d-3a70-4464-bb0d-78154b4f7aab" Aug 13 00:23:04.372678 containerd[1728]: time="2025-08-13T00:23:04.372325104Z" level=error msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" failed" error="failed to destroy network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:23:04.372747 kubelet[3195]: E0813 00:23:04.372486 3195 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:04.372747 kubelet[3195]: E0813 00:23:04.372517 3195 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5"} Aug 13 00:23:04.372747 kubelet[3195]: E0813 00:23:04.372541 3195 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:23:04.372747 kubelet[3195]: E0813 00:23:04.372565 3195 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gm4v8" podUID="4ed9f975-b4fd-44f7-a88d-65b130cbc3e0" Aug 13 00:23:07.142228 kubelet[3195]: I0813 00:23:07.142190 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:23:07.765772 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount975235232.mount: Deactivated successfully. Aug 13 00:23:07.805879 containerd[1728]: time="2025-08-13T00:23:07.805796187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:07.812309 containerd[1728]: time="2025-08-13T00:23:07.812257160Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:07.812606 containerd[1728]: time="2025-08-13T00:23:07.812572921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:23:07.823602 containerd[1728]: time="2025-08-13T00:23:07.823500422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:07.824579 containerd[1728]: time="2025-08-13T00:23:07.824537744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.587649749s" Aug 13 00:23:07.824673 containerd[1728]: time="2025-08-13T00:23:07.824582624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:23:07.836794 containerd[1728]: time="2025-08-13T00:23:07.836583047Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 
00:23:07.890026 containerd[1728]: time="2025-08-13T00:23:07.889930310Z" level=info msg="CreateContainer within sandbox \"e92bbdcb447179df9e62af0d52ef035b05886b905a9692279aabb7fc927d3815\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ded39f7a2f5a62e5b967f48908313ad64cb6caed0ce39fe31d06e6e597cb7ef1\"" Aug 13 00:23:07.891360 containerd[1728]: time="2025-08-13T00:23:07.891284993Z" level=info msg="StartContainer for \"ded39f7a2f5a62e5b967f48908313ad64cb6caed0ce39fe31d06e6e597cb7ef1\"" Aug 13 00:23:07.915831 systemd[1]: Started cri-containerd-ded39f7a2f5a62e5b967f48908313ad64cb6caed0ce39fe31d06e6e597cb7ef1.scope - libcontainer container ded39f7a2f5a62e5b967f48908313ad64cb6caed0ce39fe31d06e6e597cb7ef1. Aug 13 00:23:07.949998 containerd[1728]: time="2025-08-13T00:23:07.949946706Z" level=info msg="StartContainer for \"ded39f7a2f5a62e5b967f48908313ad64cb6caed0ce39fe31d06e6e597cb7ef1\" returns successfully" Aug 13 00:23:08.293329 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:23:08.293444 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:23:08.428112 kubelet[3195]: I0813 00:23:08.428032 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lg6q7" podStartSLOduration=1.9501570099999999 podStartE2EDuration="16.428005751s" podCreationTimestamp="2025-08-13 00:22:52 +0000 UTC" firstStartedPulling="2025-08-13 00:22:53.347597564 +0000 UTC m=+25.737493135" lastFinishedPulling="2025-08-13 00:23:07.825446265 +0000 UTC m=+40.215341876" observedRunningTime="2025-08-13 00:23:08.315087373 +0000 UTC m=+40.704982984" watchObservedRunningTime="2025-08-13 00:23:08.428005751 +0000 UTC m=+40.817901362" Aug 13 00:23:08.430769 containerd[1728]: time="2025-08-13T00:23:08.430708277Z" level=info msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.526 [INFO][4427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.527 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" iface="eth0" netns="/var/run/netns/cni-b04fff80-fbaf-37d8-71be-4c2d132e402a" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.529 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" iface="eth0" netns="/var/run/netns/cni-b04fff80-fbaf-37d8-71be-4c2d132e402a" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.529 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" iface="eth0" netns="/var/run/netns/cni-b04fff80-fbaf-37d8-71be-4c2d132e402a" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.529 [INFO][4427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.529 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.557 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.557 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.557 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.565 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.565 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.567 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:08.571669 containerd[1728]: 2025-08-13 00:23:08.570 [INFO][4427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:08.572422 containerd[1728]: time="2025-08-13T00:23:08.572371391Z" level=info msg="TearDown network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" successfully" Aug 13 00:23:08.572422 containerd[1728]: time="2025-08-13T00:23:08.572416991Z" level=info msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" returns successfully" Aug 13 00:23:08.648777 kubelet[3195]: I0813 00:23:08.648729 3195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whgjj\" (UniqueName: \"kubernetes.io/projected/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-kube-api-access-whgjj\") pod \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " Aug 13 00:23:08.648777 kubelet[3195]: I0813 00:23:08.648783 3195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-backend-key-pair\") pod \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " Aug 13 00:23:08.648935 kubelet[3195]: I0813 00:23:08.648805 3195 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-ca-bundle\") pod \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\" (UID: \"b4cd9644-0d9b-47e1-8c1b-e054338d94ce\") " Aug 13 00:23:08.649664 kubelet[3195]: I0813 00:23:08.649179 3195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b4cd9644-0d9b-47e1-8c1b-e054338d94ce" (UID: "b4cd9644-0d9b-47e1-8c1b-e054338d94ce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:23:08.651678 kubelet[3195]: I0813 00:23:08.651387 3195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-kube-api-access-whgjj" (OuterVolumeSpecName: "kube-api-access-whgjj") pod "b4cd9644-0d9b-47e1-8c1b-e054338d94ce" (UID: "b4cd9644-0d9b-47e1-8c1b-e054338d94ce"). InnerVolumeSpecName "kube-api-access-whgjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:23:08.657341 kubelet[3195]: I0813 00:23:08.657291 3195 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b4cd9644-0d9b-47e1-8c1b-e054338d94ce" (UID: "b4cd9644-0d9b-47e1-8c1b-e054338d94ce"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:23:08.749694 kubelet[3195]: I0813 00:23:08.749632 3195 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whgjj\" (UniqueName: \"kubernetes.io/projected/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-kube-api-access-whgjj\") on node \"ci-4081.3.5-a-c1c2bc5336\" DevicePath \"\"" Aug 13 00:23:08.749694 kubelet[3195]: I0813 00:23:08.749686 3195 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-backend-key-pair\") on node \"ci-4081.3.5-a-c1c2bc5336\" DevicePath \"\"" Aug 13 00:23:08.749694 kubelet[3195]: I0813 00:23:08.749700 3195 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4cd9644-0d9b-47e1-8c1b-e054338d94ce-whisker-ca-bundle\") on node \"ci-4081.3.5-a-c1c2bc5336\" DevicePath \"\"" Aug 13 00:23:08.765859 systemd[1]: run-netns-cni\x2db04fff80\x2dfbaf\x2d37d8\x2d71be\x2d4c2d132e402a.mount: Deactivated successfully. Aug 13 00:23:08.765953 systemd[1]: var-lib-kubelet-pods-b4cd9644\x2d0d9b\x2d47e1\x2d8c1b\x2de054338d94ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:23:08.766012 systemd[1]: var-lib-kubelet-pods-b4cd9644\x2d0d9b\x2d47e1\x2d8c1b\x2de054338d94ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhgjj.mount: Deactivated successfully. Aug 13 00:23:09.300205 systemd[1]: Removed slice kubepods-besteffort-podb4cd9644_0d9b_47e1_8c1b_e054338d94ce.slice - libcontainer container kubepods-besteffort-podb4cd9644_0d9b_47e1_8c1b_e054338d94ce.slice. Aug 13 00:23:09.368529 systemd[1]: Created slice kubepods-besteffort-pod82f63b2f_c213_4250_8d52_c269df2a36fe.slice - libcontainer container kubepods-besteffort-pod82f63b2f_c213_4250_8d52_c269df2a36fe.slice. 
Aug 13 00:23:09.452586 kubelet[3195]: I0813 00:23:09.452487 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82f63b2f-c213-4250-8d52-c269df2a36fe-whisker-backend-key-pair\") pod \"whisker-56566747f7-w96ww\" (UID: \"82f63b2f-c213-4250-8d52-c269df2a36fe\") " pod="calico-system/whisker-56566747f7-w96ww" Aug 13 00:23:09.452586 kubelet[3195]: I0813 00:23:09.452535 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jv2\" (UniqueName: \"kubernetes.io/projected/82f63b2f-c213-4250-8d52-c269df2a36fe-kube-api-access-76jv2\") pod \"whisker-56566747f7-w96ww\" (UID: \"82f63b2f-c213-4250-8d52-c269df2a36fe\") " pod="calico-system/whisker-56566747f7-w96ww" Aug 13 00:23:09.452586 kubelet[3195]: I0813 00:23:09.452556 3195 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82f63b2f-c213-4250-8d52-c269df2a36fe-whisker-ca-bundle\") pod \"whisker-56566747f7-w96ww\" (UID: \"82f63b2f-c213-4250-8d52-c269df2a36fe\") " pod="calico-system/whisker-56566747f7-w96ww" Aug 13 00:23:09.672720 containerd[1728]: time="2025-08-13T00:23:09.672585439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56566747f7-w96ww,Uid:82f63b2f-c213-4250-8d52-c269df2a36fe,Namespace:calico-system,Attempt:0,}" Aug 13 00:23:10.104114 kubelet[3195]: I0813 00:23:10.104069 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4cd9644-0d9b-47e1-8c1b-e054338d94ce" path="/var/lib/kubelet/pods/b4cd9644-0d9b-47e1-8c1b-e054338d94ce/volumes" Aug 13 00:23:10.117687 kernel: bpftool[4578]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:23:10.138919 systemd-networkd[1586]: cali679a1b512a3: Link UP Aug 13 00:23:10.139108 systemd-networkd[1586]: cali679a1b512a3: Gained carrier Aug 13 
00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.753 [INFO][4495] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.771 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0 whisker-56566747f7- calico-system 82f63b2f-c213-4250-8d52-c269df2a36fe 926 0 2025-08-13 00:23:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56566747f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 whisker-56566747f7-w96ww eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali679a1b512a3 [] [] }} ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.771 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.821 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" HandleID="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.821 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" HandleID="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"whisker-56566747f7-w96ww", "timestamp":"2025-08-13 00:23:09.820928566 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.821 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.821 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.821 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.831 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.844 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.853 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.855 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.857 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.857 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.859 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7 Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.864 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.881 [INFO][4537] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 handle="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.882 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.882 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:10.170439 containerd[1728]: 2025-08-13 00:23:09.882 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" HandleID="k8s-pod-network.d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:09.884 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0", GenerateName:"whisker-56566747f7-", Namespace:"calico-system", SelfLink:"", UID:"82f63b2f-c213-4250-8d52-c269df2a36fe", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56566747f7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"whisker-56566747f7-w96ww", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali679a1b512a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:09.884 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.1/32] ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:09.913 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali679a1b512a3 ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:10.140 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:10.142 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0", GenerateName:"whisker-56566747f7-", Namespace:"calico-system", SelfLink:"", UID:"82f63b2f-c213-4250-8d52-c269df2a36fe", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56566747f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7", Pod:"whisker-56566747f7-w96ww", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali679a1b512a3", MAC:"1e:0f:ef:2e:67:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:10.171722 containerd[1728]: 2025-08-13 00:23:10.168 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7" Namespace="calico-system" 
Pod="whisker-56566747f7-w96ww" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--56566747f7--w96ww-eth0" Aug 13 00:23:10.192281 containerd[1728]: time="2025-08-13T00:23:10.191709564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:10.192281 containerd[1728]: time="2025-08-13T00:23:10.192145285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:10.192281 containerd[1728]: time="2025-08-13T00:23:10.192167605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:10.192612 containerd[1728]: time="2025-08-13T00:23:10.192527405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:10.221853 systemd[1]: Started cri-containerd-d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7.scope - libcontainer container d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7. 
Aug 13 00:23:10.256594 containerd[1728]: time="2025-08-13T00:23:10.256528329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56566747f7-w96ww,Uid:82f63b2f-c213-4250-8d52-c269df2a36fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7\"" Aug 13 00:23:10.260130 containerd[1728]: time="2025-08-13T00:23:10.260091416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:23:10.419992 systemd-networkd[1586]: vxlan.calico: Link UP Aug 13 00:23:10.420006 systemd-networkd[1586]: vxlan.calico: Gained carrier Aug 13 00:23:11.330510 containerd[1728]: time="2025-08-13T00:23:11.330109527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:11.334395 containerd[1728]: time="2025-08-13T00:23:11.334356935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:23:11.344497 containerd[1728]: time="2025-08-13T00:23:11.343452072Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:11.350324 containerd[1728]: time="2025-08-13T00:23:11.350234166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:11.351581 containerd[1728]: time="2025-08-13T00:23:11.350859287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size 
\"5974847\" in 1.090724351s" Aug 13 00:23:11.351581 containerd[1728]: time="2025-08-13T00:23:11.350895927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:23:11.353494 containerd[1728]: time="2025-08-13T00:23:11.353348692Z" level=info msg="CreateContainer within sandbox \"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:23:11.388253 containerd[1728]: time="2025-08-13T00:23:11.388197039Z" level=info msg="CreateContainer within sandbox \"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8af5ed897dc9a191afcc57c79a64a2d68319a192c768be21fe284a6209890e54\"" Aug 13 00:23:11.389557 containerd[1728]: time="2025-08-13T00:23:11.389505802Z" level=info msg="StartContainer for \"8af5ed897dc9a191afcc57c79a64a2d68319a192c768be21fe284a6209890e54\"" Aug 13 00:23:11.425833 systemd[1]: Started cri-containerd-8af5ed897dc9a191afcc57c79a64a2d68319a192c768be21fe284a6209890e54.scope - libcontainer container 8af5ed897dc9a191afcc57c79a64a2d68319a192c768be21fe284a6209890e54. 
Aug 13 00:23:11.463165 containerd[1728]: time="2025-08-13T00:23:11.462151502Z" level=info msg="StartContainer for \"8af5ed897dc9a191afcc57c79a64a2d68319a192c768be21fe284a6209890e54\" returns successfully" Aug 13 00:23:11.464445 containerd[1728]: time="2025-08-13T00:23:11.464402706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:23:11.471109 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL Aug 13 00:23:11.661915 systemd-networkd[1586]: cali679a1b512a3: Gained IPv6LL Aug 13 00:23:14.115592 containerd[1728]: time="2025-08-13T00:23:14.115544477Z" level=info msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.178 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.179 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" iface="eth0" netns="/var/run/netns/cni-81a5770c-bdb4-2638-0df6-d2eb10776ebf" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.179 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" iface="eth0" netns="/var/run/netns/cni-81a5770c-bdb4-2638-0df6-d2eb10776ebf" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.180 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" iface="eth0" netns="/var/run/netns/cni-81a5770c-bdb4-2638-0df6-d2eb10776ebf" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.180 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.180 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.212 [INFO][4777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.213 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.213 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.223 [WARNING][4777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.223 [INFO][4777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.225 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:14.230665 containerd[1728]: 2025-08-13 00:23:14.226 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:14.233708 containerd[1728]: time="2025-08-13T00:23:14.231810862Z" level=info msg="TearDown network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" successfully" Aug 13 00:23:14.233708 containerd[1728]: time="2025-08-13T00:23:14.231843982Z" level=info msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" returns successfully" Aug 13 00:23:14.233722 systemd[1]: run-netns-cni\x2d81a5770c\x2dbdb4\x2d2638\x2d0df6\x2dd2eb10776ebf.mount: Deactivated successfully. 
Aug 13 00:23:14.236072 containerd[1728]: time="2025-08-13T00:23:14.235887350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wvtl5,Uid:9cfaf342-f7c6-417f-a4c3-39fca511cded,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:14.478743 systemd-networkd[1586]: cali41dc607c139: Link UP Aug 13 00:23:14.480471 systemd-networkd[1586]: cali41dc607c139: Gained carrier Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.356 [INFO][4784] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0 goldmane-58fd7646b9- calico-system 9cfaf342-f7c6-417f-a4c3-39fca511cded 944 0 2025-08-13 00:22:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 goldmane-58fd7646b9-wvtl5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali41dc607c139 [] [] }} ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.356 [INFO][4784] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.406 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" 
HandleID="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.406 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" HandleID="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d37b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"goldmane-58fd7646b9-wvtl5", "timestamp":"2025-08-13 00:23:14.40661752 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.406 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.406 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.406 [INFO][4797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.418 [INFO][4797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.428 [INFO][4797] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.436 [INFO][4797] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.440 [INFO][4797] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.444 [INFO][4797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.444 [INFO][4797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.448 [INFO][4797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666 Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.459 [INFO][4797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.469 [INFO][4797] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 handle="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.469 [INFO][4797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.470 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:14.512136 containerd[1728]: 2025-08-13 00:23:14.470 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" HandleID="k8s-pod-network.44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.473 [INFO][4784] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9cfaf342-f7c6-417f-a4c3-39fca511cded", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"goldmane-58fd7646b9-wvtl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41dc607c139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.473 [INFO][4784] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.2/32] ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.473 [INFO][4784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41dc607c139 ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.481 [INFO][4784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.482 [INFO][4784] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9cfaf342-f7c6-417f-a4c3-39fca511cded", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666", Pod:"goldmane-58fd7646b9-wvtl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41dc607c139", MAC:"96:99:1f:db:e7:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:14.512724 containerd[1728]: 2025-08-13 00:23:14.499 [INFO][4784] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666" Namespace="calico-system" Pod="goldmane-58fd7646b9-wvtl5" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:14.545668 containerd[1728]: time="2025-08-13T00:23:14.544578667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:14.545668 containerd[1728]: time="2025-08-13T00:23:14.544693148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:14.545668 containerd[1728]: time="2025-08-13T00:23:14.544727108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:14.545668 containerd[1728]: time="2025-08-13T00:23:14.544896668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:14.568846 systemd[1]: Started cri-containerd-44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666.scope - libcontainer container 44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666. 
Aug 13 00:23:14.623532 containerd[1728]: time="2025-08-13T00:23:14.623486380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-wvtl5,Uid:9cfaf342-f7c6-417f-a4c3-39fca511cded,Namespace:calico-system,Attempt:1,} returns sandbox id \"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666\"" Aug 13 00:23:14.803940 containerd[1728]: time="2025-08-13T00:23:14.803818129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:14.806315 containerd[1728]: time="2025-08-13T00:23:14.806279694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:23:14.809174 containerd[1728]: time="2025-08-13T00:23:14.809121859Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:14.812810 containerd[1728]: time="2025-08-13T00:23:14.812758386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:14.813596 containerd[1728]: time="2025-08-13T00:23:14.813552348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 3.348999561s" Aug 13 00:23:14.813596 containerd[1728]: time="2025-08-13T00:23:14.813590628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:23:14.816150 containerd[1728]: time="2025-08-13T00:23:14.816108753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:23:14.819869 containerd[1728]: time="2025-08-13T00:23:14.819826000Z" level=info msg="CreateContainer within sandbox \"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:23:14.848337 containerd[1728]: time="2025-08-13T00:23:14.848287535Z" level=info msg="CreateContainer within sandbox \"d220e40fd96abe24c0f7dc5d3f3bade1ec6b4765b032d5157ab203b21955afb7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"72bac12554561f9c2be7163f109bb960e82f28fdde490eebce612be641d9ced4\"" Aug 13 00:23:14.850173 containerd[1728]: time="2025-08-13T00:23:14.848993897Z" level=info msg="StartContainer for \"72bac12554561f9c2be7163f109bb960e82f28fdde490eebce612be641d9ced4\"" Aug 13 00:23:14.875829 systemd[1]: Started cri-containerd-72bac12554561f9c2be7163f109bb960e82f28fdde490eebce612be641d9ced4.scope - libcontainer container 72bac12554561f9c2be7163f109bb960e82f28fdde490eebce612be641d9ced4. Aug 13 00:23:14.915676 containerd[1728]: time="2025-08-13T00:23:14.914460943Z" level=info msg="StartContainer for \"72bac12554561f9c2be7163f109bb960e82f28fdde490eebce612be641d9ced4\" returns successfully" Aug 13 00:23:15.105761 containerd[1728]: time="2025-08-13T00:23:15.101505665Z" level=info msg="StopPodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" Aug 13 00:23:15.105761 containerd[1728]: time="2025-08-13T00:23:15.101808066Z" level=info msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" Aug 13 00:23:15.112845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286515122.mount: Deactivated successfully. 
Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.174 [INFO][4912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.176 [INFO][4912] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" iface="eth0" netns="/var/run/netns/cni-77546edc-8e87-e5e9-c52b-010b0428f1bc" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.177 [INFO][4912] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" iface="eth0" netns="/var/run/netns/cni-77546edc-8e87-e5e9-c52b-010b0428f1bc" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.177 [INFO][4912] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" iface="eth0" netns="/var/run/netns/cni-77546edc-8e87-e5e9-c52b-010b0428f1bc" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.177 [INFO][4912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.177 [INFO][4912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.202 [INFO][4929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.202 
[INFO][4929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.202 [INFO][4929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.210 [WARNING][4929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.210 [INFO][4929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.212 [INFO][4929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:15.215906 containerd[1728]: 2025-08-13 00:23:15.213 [INFO][4912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:15.218753 containerd[1728]: time="2025-08-13T00:23:15.216743248Z" level=info msg="TearDown network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" successfully" Aug 13 00:23:15.218753 containerd[1728]: time="2025-08-13T00:23:15.216777408Z" level=info msg="StopPodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" returns successfully" Aug 13 00:23:15.219054 systemd[1]: run-netns-cni\x2d77546edc\x2d8e87\x2de5e9\x2dc52b\x2d010b0428f1bc.mount: Deactivated successfully. 
Aug 13 00:23:15.220515 containerd[1728]: time="2025-08-13T00:23:15.220168135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-dc5df,Uid:ef19092f-3405-456e-849a-6f31b80acace,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.176 [INFO][4919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.176 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" iface="eth0" netns="/var/run/netns/cni-4e4fc2a5-7175-8a19-8427-0ee8eb6a2378" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.177 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" iface="eth0" netns="/var/run/netns/cni-4e4fc2a5-7175-8a19-8427-0ee8eb6a2378" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.180 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" iface="eth0" netns="/var/run/netns/cni-4e4fc2a5-7175-8a19-8427-0ee8eb6a2378" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.180 [INFO][4919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.180 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.208 [INFO][4934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.208 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.212 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.226 [WARNING][4934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.226 [INFO][4934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.233 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:15.237550 containerd[1728]: 2025-08-13 00:23:15.234 [INFO][4919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:15.238373 containerd[1728]: time="2025-08-13T00:23:15.237749289Z" level=info msg="TearDown network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" successfully" Aug 13 00:23:15.238373 containerd[1728]: time="2025-08-13T00:23:15.237776529Z" level=info msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" returns successfully" Aug 13 00:23:15.242137 systemd[1]: run-netns-cni\x2d4e4fc2a5\x2d7175\x2d8a19\x2d8427\x2d0ee8eb6a2378.mount: Deactivated successfully. 
Aug 13 00:23:15.244975 containerd[1728]: time="2025-08-13T00:23:15.244757983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-vbl26,Uid:4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:23:15.352270 kubelet[3195]: I0813 00:23:15.352178 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56566747f7-w96ww" podStartSLOduration=1.7952898510000002 podStartE2EDuration="6.35215899s" podCreationTimestamp="2025-08-13 00:23:09 +0000 UTC" firstStartedPulling="2025-08-13 00:23:10.258042572 +0000 UTC m=+42.647938143" lastFinishedPulling="2025-08-13 00:23:14.814911671 +0000 UTC m=+47.204807282" observedRunningTime="2025-08-13 00:23:15.35198695 +0000 UTC m=+47.741882561" watchObservedRunningTime="2025-08-13 00:23:15.35215899 +0000 UTC m=+47.742054601" Aug 13 00:23:15.418278 systemd-networkd[1586]: cali5e96a0cf314: Link UP Aug 13 00:23:15.419607 systemd-networkd[1586]: cali5e96a0cf314: Gained carrier Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.300 [INFO][4942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0 calico-apiserver-7b596cbbd- calico-apiserver ef19092f-3405-456e-849a-6f31b80acace 956 0 2025-08-13 00:22:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b596cbbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 calico-apiserver-7b596cbbd-dc5df eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e96a0cf314 [] [] }} ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" 
WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.302 [INFO][4942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.347 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" HandleID="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.347 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" HandleID="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"calico-apiserver-7b596cbbd-dc5df", "timestamp":"2025-08-13 00:23:15.347230501 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.348 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.348 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.348 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.374 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.386 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.391 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.392 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.395 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.395 [INFO][4965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.396 [INFO][4965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.403 [INFO][4965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" 
host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.410 [INFO][4965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.410 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.410 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:15.449339 containerd[1728]: 2025-08-13 00:23:15.411 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" HandleID="k8s-pod-network.ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.413 [INFO][4942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef19092f-3405-456e-849a-6f31b80acace", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"calico-apiserver-7b596cbbd-dc5df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e96a0cf314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.413 [INFO][4942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.3/32] ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.413 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e96a0cf314 ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.421 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.421 [INFO][4942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef19092f-3405-456e-849a-6f31b80acace", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c", Pod:"calico-apiserver-7b596cbbd-dc5df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e96a0cf314", MAC:"0a:50:da:e2:82:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:15.450377 containerd[1728]: 2025-08-13 00:23:15.447 [INFO][4942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-dc5df" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:15.474021 containerd[1728]: time="2025-08-13T00:23:15.473883866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:15.474021 containerd[1728]: time="2025-08-13T00:23:15.473946426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:15.474021 containerd[1728]: time="2025-08-13T00:23:15.473972546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:15.474660 containerd[1728]: time="2025-08-13T00:23:15.474070466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:15.494881 systemd[1]: Started cri-containerd-ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c.scope - libcontainer container ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c. 
Aug 13 00:23:15.537362 systemd-networkd[1586]: cali04d0a83608c: Link UP Aug 13 00:23:15.537555 systemd-networkd[1586]: cali04d0a83608c: Gained carrier Aug 13 00:23:15.556323 containerd[1728]: time="2025-08-13T00:23:15.556187985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-dc5df,Uid:ef19092f-3405-456e-849a-6f31b80acace,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c\"" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.338 [INFO][4954] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0 calico-apiserver-7b596cbbd- calico-apiserver 4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a 957 0 2025-08-13 00:22:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b596cbbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 calico-apiserver-7b596cbbd-vbl26 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali04d0a83608c [] [] }} ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.338 [INFO][4954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.393 [INFO][4973] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" HandleID="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.394 [INFO][4973] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" HandleID="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"calico-apiserver-7b596cbbd-vbl26", "timestamp":"2025-08-13 00:23:15.393944311 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.394 [INFO][4973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.410 [INFO][4973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.410 [INFO][4973] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.474 [INFO][4973] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.485 [INFO][4973] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.493 [INFO][4973] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.495 [INFO][4973] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.498 [INFO][4973] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.498 [INFO][4973] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.501 [INFO][4973] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3 Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.505 [INFO][4973] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.519 [INFO][4973] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 handle="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.519 [INFO][4973] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.519 [INFO][4973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:15.566163 containerd[1728]: 2025-08-13 00:23:15.519 [INFO][4973] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" HandleID="k8s-pod-network.e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.532 [INFO][4954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"calico-apiserver-7b596cbbd-vbl26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d0a83608c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.533 [INFO][4954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.4/32] ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.533 [INFO][4954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04d0a83608c ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.536 [INFO][4954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" 
WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.538 [INFO][4954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3", Pod:"calico-apiserver-7b596cbbd-vbl26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d0a83608c", MAC:"a6:d8:dd:30:04:1d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:15.566753 containerd[1728]: 2025-08-13 00:23:15.563 [INFO][4954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3" Namespace="calico-apiserver" Pod="calico-apiserver-7b596cbbd-vbl26" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:15.584784 containerd[1728]: time="2025-08-13T00:23:15.584469720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:15.584784 containerd[1728]: time="2025-08-13T00:23:15.584521600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:15.584784 containerd[1728]: time="2025-08-13T00:23:15.584532160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:15.584784 containerd[1728]: time="2025-08-13T00:23:15.584614360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:15.601854 systemd[1]: Started cri-containerd-e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3.scope - libcontainer container e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3. 
Aug 13 00:23:15.633384 containerd[1728]: time="2025-08-13T00:23:15.633325575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b596cbbd-vbl26,Uid:4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3\"" Aug 13 00:23:16.461916 systemd-networkd[1586]: cali41dc607c139: Gained IPv6LL Aug 13 00:23:17.102059 containerd[1728]: time="2025-08-13T00:23:17.101665577Z" level=info msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.150 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.151 [INFO][5093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" iface="eth0" netns="/var/run/netns/cni-01e1b1ac-94bd-d5ee-1752-e6e947b6de90" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.151 [INFO][5093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" iface="eth0" netns="/var/run/netns/cni-01e1b1ac-94bd-d5ee-1752-e6e947b6de90" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.154 [INFO][5093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" iface="eth0" netns="/var/run/netns/cni-01e1b1ac-94bd-d5ee-1752-e6e947b6de90" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.154 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.154 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.172 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.172 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.172 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.186 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.186 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.188 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:17.192539 containerd[1728]: 2025-08-13 00:23:17.190 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:17.194776 containerd[1728]: time="2025-08-13T00:23:17.193044513Z" level=info msg="TearDown network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" successfully" Aug 13 00:23:17.194776 containerd[1728]: time="2025-08-13T00:23:17.193120714Z" level=info msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" returns successfully" Aug 13 00:23:17.194322 systemd[1]: run-netns-cni\x2d01e1b1ac\x2d94bd\x2dd5ee\x2d1752\x2de6e947b6de90.mount: Deactivated successfully. 
Aug 13 00:23:17.195783 containerd[1728]: time="2025-08-13T00:23:17.195457358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6v4g,Uid:f75dc80d-3a70-4464-bb0d-78154b4f7aab,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:17.231491 systemd-networkd[1586]: cali04d0a83608c: Gained IPv6LL Aug 13 00:23:17.333341 systemd-networkd[1586]: cali1f21a70f2c5: Link UP Aug 13 00:23:17.334585 systemd-networkd[1586]: cali1f21a70f2c5: Gained carrier Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.263 [INFO][5110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0 csi-node-driver- calico-system f75dc80d-3a70-4464-bb0d-78154b4f7aab 980 0 2025-08-13 00:22:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 csi-node-driver-l6v4g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f21a70f2c5 [] [] }} ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.263 [INFO][5110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.289 [INFO][5118] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" HandleID="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.290 [INFO][5118] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" HandleID="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"csi-node-driver-l6v4g", "timestamp":"2025-08-13 00:23:17.289851181 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.290 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.290 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.290 [INFO][5118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.300 [INFO][5118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.304 [INFO][5118] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.308 [INFO][5118] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.309 [INFO][5118] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.311 [INFO][5118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.312 [INFO][5118] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.313 [INFO][5118] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803 Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.317 [INFO][5118] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.325 [INFO][5118] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.32.5/26] block=192.168.32.0/26 handle="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.325 [INFO][5118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.5/26] handle="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.325 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:17.357557 containerd[1728]: 2025-08-13 00:23:17.326 [INFO][5118] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.5/26] IPv6=[] ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" HandleID="k8s-pod-network.5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.328 [INFO][5110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75dc80d-3a70-4464-bb0d-78154b4f7aab", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"csi-node-driver-l6v4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f21a70f2c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.329 [INFO][5110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.5/32] ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.329 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f21a70f2c5 ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.335 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.338 
[INFO][5110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75dc80d-3a70-4464-bb0d-78154b4f7aab", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803", Pod:"csi-node-driver-l6v4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f21a70f2c5", MAC:"1e:75:3e:e2:2c:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:17.358778 containerd[1728]: 2025-08-13 00:23:17.354 [INFO][5110] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803" Namespace="calico-system" Pod="csi-node-driver-l6v4g" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:17.379551 containerd[1728]: time="2025-08-13T00:23:17.379403114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:17.379958 containerd[1728]: time="2025-08-13T00:23:17.379776195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:17.379958 containerd[1728]: time="2025-08-13T00:23:17.379835515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:17.380061 containerd[1728]: time="2025-08-13T00:23:17.379943795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:17.404898 systemd[1]: Started cri-containerd-5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803.scope - libcontainer container 5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803. 
Aug 13 00:23:17.422150 systemd-networkd[1586]: cali5e96a0cf314: Gained IPv6LL Aug 13 00:23:17.429145 containerd[1728]: time="2025-08-13T00:23:17.429005730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l6v4g,Uid:f75dc80d-3a70-4464-bb0d-78154b4f7aab,Namespace:calico-system,Attempt:1,} returns sandbox id \"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803\"" Aug 13 00:23:18.104778 containerd[1728]: time="2025-08-13T00:23:18.103031315Z" level=info msg="StopPodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" Aug 13 00:23:18.104778 containerd[1728]: time="2025-08-13T00:23:18.104148757Z" level=info msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" Aug 13 00:23:18.108118 containerd[1728]: time="2025-08-13T00:23:18.108092924Z" level=info msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\"" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.258 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.258 [INFO][5211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" iface="eth0" netns="/var/run/netns/cni-a34983e4-680f-b876-2c5d-e1f74fc5cca7" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.259 [INFO][5211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" iface="eth0" netns="/var/run/netns/cni-a34983e4-680f-b876-2c5d-e1f74fc5cca7" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.265 [INFO][5211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" iface="eth0" netns="/var/run/netns/cni-a34983e4-680f-b876-2c5d-e1f74fc5cca7" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.266 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.266 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.335 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.335 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.339 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.358 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.358 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.364 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:18.372611 containerd[1728]: 2025-08-13 00:23:18.367 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Aug 13 00:23:18.377967 containerd[1728]: time="2025-08-13T00:23:18.377909847Z" level=info msg="TearDown network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" successfully" Aug 13 00:23:18.377967 containerd[1728]: time="2025-08-13T00:23:18.377958287Z" level=info msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" returns successfully" Aug 13 00:23:18.379001 containerd[1728]: time="2025-08-13T00:23:18.378583968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddbf5f64-sh7tp,Uid:eacb4a1e-81fe-44e2-8375-85476e370ebd,Namespace:calico-system,Attempt:1,}" Aug 13 00:23:18.380495 systemd[1]: run-netns-cni\x2da34983e4\x2d680f\x2db876\x2d2c5d\x2de1f74fc5cca7.mount: Deactivated successfully. 
Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.263 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.263 [INFO][5206] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" iface="eth0" netns="/var/run/netns/cni-f6b7c8aa-b929-d2e4-6593-c2575954cd6e" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.264 [INFO][5206] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" iface="eth0" netns="/var/run/netns/cni-f6b7c8aa-b929-d2e4-6593-c2575954cd6e" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.266 [INFO][5206] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" iface="eth0" netns="/var/run/netns/cni-f6b7c8aa-b929-d2e4-6593-c2575954cd6e" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.266 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.266 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.351 [INFO][5228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.351 [INFO][5228] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.364 [INFO][5228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.378 [WARNING][5228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.378 [INFO][5228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.381 [INFO][5228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:18.390531 containerd[1728]: 2025-08-13 00:23:18.387 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:18.393010 containerd[1728]: time="2025-08-13T00:23:18.390752271Z" level=info msg="TearDown network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" successfully" Aug 13 00:23:18.393010 containerd[1728]: time="2025-08-13T00:23:18.390776591Z" level=info msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" returns successfully" Aug 13 00:23:18.394366 containerd[1728]: time="2025-08-13T00:23:18.394295638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm4v8,Uid:4ed9f975-b4fd-44f7-a88d-65b130cbc3e0,Namespace:kube-system,Attempt:1,}" Aug 13 00:23:18.395336 systemd[1]: run-netns-cni\x2df6b7c8aa\x2db929\x2dd2e4\x2d6593\x2dc2575954cd6e.mount: Deactivated successfully. Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.289 [INFO][5208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.289 [INFO][5208] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" iface="eth0" netns="/var/run/netns/cni-0903b6e1-4635-2c66-f592-7465c4830f42" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.290 [INFO][5208] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" iface="eth0" netns="/var/run/netns/cni-0903b6e1-4635-2c66-f592-7465c4830f42" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.293 [INFO][5208] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" iface="eth0" netns="/var/run/netns/cni-0903b6e1-4635-2c66-f592-7465c4830f42" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.293 [INFO][5208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.293 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.367 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.367 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.382 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.401 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.402 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.403 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:18.420792 containerd[1728]: 2025-08-13 00:23:18.406 [INFO][5208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:18.421783 containerd[1728]: time="2025-08-13T00:23:18.421148490Z" level=info msg="TearDown network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" successfully" Aug 13 00:23:18.421960 containerd[1728]: time="2025-08-13T00:23:18.421858532Z" level=info msg="StopPodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" returns successfully" Aug 13 00:23:18.423359 containerd[1728]: time="2025-08-13T00:23:18.423311734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2x28l,Uid:4056208b-d9c3-4786-99a2-567d10cf8d83,Namespace:kube-system,Attempt:1,}" Aug 13 00:23:18.573882 systemd-networkd[1586]: cali1f21a70f2c5: Gained IPv6LL Aug 13 00:23:18.632327 systemd-networkd[1586]: calide13efb0ea7: Link UP Aug 13 00:23:18.632478 systemd-networkd[1586]: calide13efb0ea7: Gained carrier Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.493 [INFO][5246] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0 calico-kube-controllers-76ddbf5f64- calico-system eacb4a1e-81fe-44e2-8375-85476e370ebd 989 0 2025-08-13 00:22:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76ddbf5f64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 calico-kube-controllers-76ddbf5f64-sh7tp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calide13efb0ea7 [] [] }} ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.494 [INFO][5246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.531 [INFO][5280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" HandleID="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.531 [INFO][5280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" 
HandleID="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"calico-kube-controllers-76ddbf5f64-sh7tp", "timestamp":"2025-08-13 00:23:18.531552064 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.531 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.532 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.532 [INFO][5280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.543 [INFO][5280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.550 [INFO][5280] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.561 [INFO][5280] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.564 [INFO][5280] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.569 [INFO][5280] ipam/ipam.go 235: 
Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.570 [INFO][5280] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.575 [INFO][5280] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.592 [INFO][5280] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.623 [INFO][5280] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.6/26] block=192.168.32.0/26 handle="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.623 [INFO][5280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.6/26] handle="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.624 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:18.669971 containerd[1728]: 2025-08-13 00:23:18.624 [INFO][5280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.6/26] IPv6=[] ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" HandleID="k8s-pod-network.5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.628 [INFO][5246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0", GenerateName:"calico-kube-controllers-76ddbf5f64-", Namespace:"calico-system", SelfLink:"", UID:"eacb4a1e-81fe-44e2-8375-85476e370ebd", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddbf5f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"calico-kube-controllers-76ddbf5f64-sh7tp", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide13efb0ea7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.628 [INFO][5246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.6/32] ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.628 [INFO][5246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide13efb0ea7 ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.632 [INFO][5246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.640 [INFO][5246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" Pod="calico-kube-controllers-76ddbf5f64-sh7tp" 
WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0", GenerateName:"calico-kube-controllers-76ddbf5f64-", Namespace:"calico-system", SelfLink:"", UID:"eacb4a1e-81fe-44e2-8375-85476e370ebd", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddbf5f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a", Pod:"calico-kube-controllers-76ddbf5f64-sh7tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide13efb0ea7", MAC:"0e:16:d0:2b:d9:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.670508 containerd[1728]: 2025-08-13 00:23:18.664 [INFO][5246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a" Namespace="calico-system" 
Pod="calico-kube-controllers-76ddbf5f64-sh7tp" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0" Aug 13 00:23:18.738372 systemd-networkd[1586]: cali24c1ea57ef5: Link UP Aug 13 00:23:18.740110 containerd[1728]: time="2025-08-13T00:23:18.739304186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:18.740110 containerd[1728]: time="2025-08-13T00:23:18.739366986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:18.740110 containerd[1728]: time="2025-08-13T00:23:18.739383066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:18.740508 systemd-networkd[1586]: cali24c1ea57ef5: Gained carrier Aug 13 00:23:18.741137 containerd[1728]: time="2025-08-13T00:23:18.740258948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:18.769074 systemd[1]: Started cri-containerd-5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a.scope - libcontainer container 5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a. 
Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.533 [INFO][5259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0 coredns-7c65d6cfc9- kube-system 4ed9f975-b4fd-44f7-a88d-65b130cbc3e0 990 0 2025-08-13 00:22:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 coredns-7c65d6cfc9-gm4v8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24c1ea57ef5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.534 [INFO][5259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.600 [INFO][5288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" HandleID="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.600 [INFO][5288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" HandleID="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" 
Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"coredns-7c65d6cfc9-gm4v8", "timestamp":"2025-08-13 00:23:18.599987276 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.600 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.624 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.624 [INFO][5288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.647 [INFO][5288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.672 [INFO][5288] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.681 [INFO][5288] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.687 [INFO][5288] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.692 [INFO][5288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 
containerd[1728]: 2025-08-13 00:23:18.692 [INFO][5288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.697 [INFO][5288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5 Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.703 [INFO][5288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.718 [INFO][5288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.7/26] block=192.168.32.0/26 handle="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.718 [INFO][5288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.7/26] handle="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.719 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:23:18.786991 containerd[1728]: 2025-08-13 00:23:18.719 [INFO][5288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.7/26] IPv6=[] ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" HandleID="k8s-pod-network.705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.789079 containerd[1728]: 2025-08-13 00:23:18.726 [INFO][5259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"coredns-7c65d6cfc9-gm4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali24c1ea57ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.789079 containerd[1728]: 2025-08-13 00:23:18.727 [INFO][5259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.7/32] ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.789079 containerd[1728]: 2025-08-13 00:23:18.727 [INFO][5259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24c1ea57ef5 ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.789079 containerd[1728]: 2025-08-13 00:23:18.741 [INFO][5259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.789079 containerd[1728]: 2025-08-13 00:23:18.743 [INFO][5259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" 
WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5", Pod:"coredns-7c65d6cfc9-gm4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24c1ea57ef5", MAC:"da:80:f9:d1:4b:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.789079 containerd[1728]: 
2025-08-13 00:23:18.783 [INFO][5259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gm4v8" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:18.850711 systemd-networkd[1586]: cali706ad445546: Link UP Aug 13 00:23:18.850994 systemd-networkd[1586]: cali706ad445546: Gained carrier Aug 13 00:23:18.857421 containerd[1728]: time="2025-08-13T00:23:18.857071494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ddbf5f64-sh7tp,Uid:eacb4a1e-81fe-44e2-8375-85476e370ebd,Namespace:calico-system,Attempt:1,} returns sandbox id \"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a\"" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.597 [INFO][5273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0 coredns-7c65d6cfc9- kube-system 4056208b-d9c3-4786-99a2-567d10cf8d83 991 0 2025-08-13 00:22:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-a-c1c2bc5336 coredns-7c65d6cfc9-2x28l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali706ad445546 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.597 [INFO][5273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.724 [INFO][5296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" HandleID="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.724 [INFO][5296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" HandleID="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000275e20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-a-c1c2bc5336", "pod":"coredns-7c65d6cfc9-2x28l", "timestamp":"2025-08-13 00:23:18.724311317 +0000 UTC"}, Hostname:"ci-4081.3.5-a-c1c2bc5336", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.724 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.724 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.724 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-a-c1c2bc5336' Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.753 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.773 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.789 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.793 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.798 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.799 [INFO][5296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.802 [INFO][5296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.814 [INFO][5296] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.832 [INFO][5296] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.32.8/26] block=192.168.32.0/26 handle="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.832 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.8/26] handle="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" host="ci-4081.3.5-a-c1c2bc5336" Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.833 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:18.875438 containerd[1728]: 2025-08-13 00:23:18.835 [INFO][5296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.8/26] IPv6=[] ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" HandleID="k8s-pod-network.a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.844 [INFO][5273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4056208b-d9c3-4786-99a2-567d10cf8d83", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"", Pod:"coredns-7c65d6cfc9-2x28l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706ad445546", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.845 [INFO][5273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.8/32] ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.845 [INFO][5273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali706ad445546 ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.851 [INFO][5273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.852 [INFO][5273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4056208b-d9c3-4786-99a2-567d10cf8d83", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b", Pod:"coredns-7c65d6cfc9-2x28l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706ad445546", MAC:"c2:9a:74:61:67:99", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:18.876426 containerd[1728]: 2025-08-13 00:23:18.869 [INFO][5273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2x28l" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:18.970357 containerd[1728]: time="2025-08-13T00:23:18.932197599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:18.970357 containerd[1728]: time="2025-08-13T00:23:18.932272600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:18.970357 containerd[1728]: time="2025-08-13T00:23:18.932291560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:18.970357 containerd[1728]: time="2025-08-13T00:23:18.932390880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:18.993792 systemd[1]: Started cri-containerd-705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5.scope - libcontainer container 705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5. 
Aug 13 00:23:19.024499 containerd[1728]: time="2025-08-13T00:23:19.024218617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gm4v8,Uid:4ed9f975-b4fd-44f7-a88d-65b130cbc3e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5\"" Aug 13 00:23:19.027916 containerd[1728]: time="2025-08-13T00:23:19.027538904Z" level=info msg="CreateContainer within sandbox \"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:19.197879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441980542.mount: Deactivated successfully. Aug 13 00:23:19.197983 systemd[1]: run-netns-cni\x2d0903b6e1\x2d4635\x2d2c66\x2df592\x2d7465c4830f42.mount: Deactivated successfully. Aug 13 00:23:19.629318 containerd[1728]: time="2025-08-13T00:23:19.627867826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:23:19.629318 containerd[1728]: time="2025-08-13T00:23:19.628425507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:23:19.629318 containerd[1728]: time="2025-08-13T00:23:19.628439427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:19.629318 containerd[1728]: time="2025-08-13T00:23:19.628530067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:23:19.656836 systemd[1]: Started cri-containerd-a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b.scope - libcontainer container a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b. 
Aug 13 00:23:19.670622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3549525813.mount: Deactivated successfully. Aug 13 00:23:19.678877 containerd[1728]: time="2025-08-13T00:23:19.678797524Z" level=info msg="CreateContainer within sandbox \"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa41ba3bc9193a654d90d7a5d3ba8ecefeaba997f9908e3668f35645b1ab8886\"" Aug 13 00:23:19.681306 containerd[1728]: time="2025-08-13T00:23:19.680105847Z" level=info msg="StartContainer for \"aa41ba3bc9193a654d90d7a5d3ba8ecefeaba997f9908e3668f35645b1ab8886\"" Aug 13 00:23:19.724051 systemd[1]: Started cri-containerd-aa41ba3bc9193a654d90d7a5d3ba8ecefeaba997f9908e3668f35645b1ab8886.scope - libcontainer container aa41ba3bc9193a654d90d7a5d3ba8ecefeaba997f9908e3668f35645b1ab8886. Aug 13 00:23:19.726427 containerd[1728]: time="2025-08-13T00:23:19.726378576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2x28l,Uid:4056208b-d9c3-4786-99a2-567d10cf8d83,Namespace:kube-system,Attempt:1,} returns sandbox id \"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b\"" Aug 13 00:23:19.734432 containerd[1728]: time="2025-08-13T00:23:19.734304432Z" level=info msg="CreateContainer within sandbox \"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:23:19.772882 containerd[1728]: time="2025-08-13T00:23:19.772686866Z" level=info msg="CreateContainer within sandbox \"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"265dd8033f3243abd8c6ea226b3c450f83905cc7984731d02e8fa496089857cd\"" Aug 13 00:23:19.774953 containerd[1728]: time="2025-08-13T00:23:19.772693306Z" level=info msg="StartContainer for \"aa41ba3bc9193a654d90d7a5d3ba8ecefeaba997f9908e3668f35645b1ab8886\" returns successfully" Aug 13 
00:23:19.775198 containerd[1728]: time="2025-08-13T00:23:19.774723430Z" level=info msg="StartContainer for \"265dd8033f3243abd8c6ea226b3c450f83905cc7984731d02e8fa496089857cd\"" Aug 13 00:23:19.814830 systemd[1]: Started cri-containerd-265dd8033f3243abd8c6ea226b3c450f83905cc7984731d02e8fa496089857cd.scope - libcontainer container 265dd8033f3243abd8c6ea226b3c450f83905cc7984731d02e8fa496089857cd. Aug 13 00:23:19.857294 containerd[1728]: time="2025-08-13T00:23:19.857240750Z" level=info msg="StartContainer for \"265dd8033f3243abd8c6ea226b3c450f83905cc7984731d02e8fa496089857cd\" returns successfully" Aug 13 00:23:19.981803 systemd-networkd[1586]: calide13efb0ea7: Gained IPv6LL Aug 13 00:23:19.982114 systemd-networkd[1586]: cali706ad445546: Gained IPv6LL Aug 13 00:23:20.302306 systemd-networkd[1586]: cali24c1ea57ef5: Gained IPv6LL Aug 13 00:23:20.380863 containerd[1728]: time="2025-08-13T00:23:20.380749922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:20.383345 containerd[1728]: time="2025-08-13T00:23:20.383308807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:23:20.389251 containerd[1728]: time="2025-08-13T00:23:20.387277975Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:20.405555 containerd[1728]: time="2025-08-13T00:23:20.405034569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:20.405809 kubelet[3195]: I0813 00:23:20.404868 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2x28l" 
podStartSLOduration=46.404846649 podStartE2EDuration="46.404846649s" podCreationTimestamp="2025-08-13 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:20.378429278 +0000 UTC m=+52.768324889" watchObservedRunningTime="2025-08-13 00:23:20.404846649 +0000 UTC m=+52.794742260" Aug 13 00:23:20.413991 containerd[1728]: time="2025-08-13T00:23:20.412503424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 5.596356311s" Aug 13 00:23:20.413991 containerd[1728]: time="2025-08-13T00:23:20.412559184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:23:20.416377 containerd[1728]: time="2025-08-13T00:23:20.416345591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:23:20.419490 containerd[1728]: time="2025-08-13T00:23:20.419448637Z" level=info msg="CreateContainer within sandbox \"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:23:20.432416 kubelet[3195]: I0813 00:23:20.432357 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gm4v8" podStartSLOduration=46.432226422 podStartE2EDuration="46.432226422s" podCreationTimestamp="2025-08-13 00:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:23:20.432066862 +0000 UTC m=+52.821962473" 
watchObservedRunningTime="2025-08-13 00:23:20.432226422 +0000 UTC m=+52.822122073" Aug 13 00:23:20.456283 containerd[1728]: time="2025-08-13T00:23:20.455895948Z" level=info msg="CreateContainer within sandbox \"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6\"" Aug 13 00:23:20.457367 containerd[1728]: time="2025-08-13T00:23:20.457241910Z" level=info msg="StartContainer for \"d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6\"" Aug 13 00:23:20.499886 systemd[1]: Started cri-containerd-d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6.scope - libcontainer container d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6. Aug 13 00:23:20.541155 containerd[1728]: time="2025-08-13T00:23:20.540222951Z" level=info msg="StartContainer for \"d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6\" returns successfully" Aug 13 00:23:21.395402 kubelet[3195]: I0813 00:23:21.394933 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-wvtl5" podStartSLOduration=23.613819613 podStartE2EDuration="29.394916362s" podCreationTimestamp="2025-08-13 00:22:52 +0000 UTC" firstStartedPulling="2025-08-13 00:23:14.634492761 +0000 UTC m=+47.024388332" lastFinishedPulling="2025-08-13 00:23:20.41558951 +0000 UTC m=+52.805485081" observedRunningTime="2025-08-13 00:23:21.39399252 +0000 UTC m=+53.783888171" watchObservedRunningTime="2025-08-13 00:23:21.394916362 +0000 UTC m=+53.784811973" Aug 13 00:23:22.358562 containerd[1728]: time="2025-08-13T00:23:22.358496823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.361371 containerd[1728]: time="2025-08-13T00:23:22.361096988Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:23:22.363814 containerd[1728]: time="2025-08-13T00:23:22.363743234Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.367838 containerd[1728]: time="2025-08-13T00:23:22.367698801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.368475 containerd[1728]: time="2025-08-13T00:23:22.368336042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.95173477s" Aug 13 00:23:22.368475 containerd[1728]: time="2025-08-13T00:23:22.368375522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:23:22.369925 containerd[1728]: time="2025-08-13T00:23:22.369777125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:23:22.371613 containerd[1728]: time="2025-08-13T00:23:22.371364928Z" level=info msg="CreateContainer within sandbox \"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:23:22.396152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964838892.mount: Deactivated successfully. 
Aug 13 00:23:22.409830 containerd[1728]: time="2025-08-13T00:23:22.407048837Z" level=info msg="CreateContainer within sandbox \"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167\"" Aug 13 00:23:22.414668 containerd[1728]: time="2025-08-13T00:23:22.413430130Z" level=info msg="StartContainer for \"e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167\"" Aug 13 00:23:22.487152 systemd[1]: run-containerd-runc-k8s.io-e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167-runc.cBrCug.mount: Deactivated successfully. Aug 13 00:23:22.496866 systemd[1]: Started cri-containerd-e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167.scope - libcontainer container e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167. Aug 13 00:23:22.536114 containerd[1728]: time="2025-08-13T00:23:22.536063326Z" level=info msg="StartContainer for \"e215941a17954f3e71f38bd82f034940f4bb6ed71055b3a65e6155c1a2f1a167\" returns successfully" Aug 13 00:23:22.830513 containerd[1728]: time="2025-08-13T00:23:22.830440695Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:22.835097 containerd[1728]: time="2025-08-13T00:23:22.834763304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:23:22.836865 containerd[1728]: time="2025-08-13T00:23:22.836821987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 467.012102ms" Aug 13 
00:23:22.836930 containerd[1728]: time="2025-08-13T00:23:22.836865308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:23:22.838183 containerd[1728]: time="2025-08-13T00:23:22.838108470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:23:22.841536 containerd[1728]: time="2025-08-13T00:23:22.841478876Z" level=info msg="CreateContainer within sandbox \"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:23:22.874670 containerd[1728]: time="2025-08-13T00:23:22.874152500Z" level=info msg="CreateContainer within sandbox \"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f176d7d4130056f42d639860dc46b942218a33f3794bbe6855bd558e6a0fe77\"" Aug 13 00:23:22.875621 containerd[1728]: time="2025-08-13T00:23:22.875582142Z" level=info msg="StartContainer for \"2f176d7d4130056f42d639860dc46b942218a33f3794bbe6855bd558e6a0fe77\"" Aug 13 00:23:22.911558 systemd[1]: Started cri-containerd-2f176d7d4130056f42d639860dc46b942218a33f3794bbe6855bd558e6a0fe77.scope - libcontainer container 2f176d7d4130056f42d639860dc46b942218a33f3794bbe6855bd558e6a0fe77. 
Aug 13 00:23:22.956279 containerd[1728]: time="2025-08-13T00:23:22.954582695Z" level=info msg="StartContainer for \"2f176d7d4130056f42d639860dc46b942218a33f3794bbe6855bd558e6a0fe77\" returns successfully" Aug 13 00:23:23.448664 kubelet[3195]: I0813 00:23:23.448299 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b596cbbd-vbl26" podStartSLOduration=31.245229477 podStartE2EDuration="38.448279169s" podCreationTimestamp="2025-08-13 00:22:45 +0000 UTC" firstStartedPulling="2025-08-13 00:23:15.634736497 +0000 UTC m=+48.024632108" lastFinishedPulling="2025-08-13 00:23:22.837786229 +0000 UTC m=+55.227681800" observedRunningTime="2025-08-13 00:23:23.415589386 +0000 UTC m=+55.805484957" watchObservedRunningTime="2025-08-13 00:23:23.448279169 +0000 UTC m=+55.838174780" Aug 13 00:23:24.390132 kubelet[3195]: I0813 00:23:24.389976 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:23:24.530077 kubelet[3195]: I0813 00:23:24.529922 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b596cbbd-dc5df" podStartSLOduration=32.717866801 podStartE2EDuration="39.529879498s" podCreationTimestamp="2025-08-13 00:22:45 +0000 UTC" firstStartedPulling="2025-08-13 00:23:15.557476428 +0000 UTC m=+47.947372039" lastFinishedPulling="2025-08-13 00:23:22.369489125 +0000 UTC m=+54.759384736" observedRunningTime="2025-08-13 00:23:23.450890054 +0000 UTC m=+55.840785665" watchObservedRunningTime="2025-08-13 00:23:24.529879498 +0000 UTC m=+56.919775149" Aug 13 00:23:25.180011 containerd[1728]: time="2025-08-13T00:23:25.179948954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:25.182985 containerd[1728]: time="2025-08-13T00:23:25.182775800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes 
read=8225702" Aug 13 00:23:25.185450 containerd[1728]: time="2025-08-13T00:23:25.185376085Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:25.189165 containerd[1728]: time="2025-08-13T00:23:25.189106972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:25.190069 containerd[1728]: time="2025-08-13T00:23:25.189684653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.351496263s" Aug 13 00:23:25.190069 containerd[1728]: time="2025-08-13T00:23:25.189721933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:23:25.191720 containerd[1728]: time="2025-08-13T00:23:25.191461577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:23:25.193669 containerd[1728]: time="2025-08-13T00:23:25.193464060Z" level=info msg="CreateContainer within sandbox \"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:23:25.225661 containerd[1728]: time="2025-08-13T00:23:25.225583802Z" level=info msg="CreateContainer within sandbox \"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d76ab06ff00a72c6f9a9a1c5b9b74907f6dad08236385a93c2a27b16f6d1cde\"" Aug 13 
00:23:25.227222 containerd[1728]: time="2025-08-13T00:23:25.226536404Z" level=info msg="StartContainer for \"4d76ab06ff00a72c6f9a9a1c5b9b74907f6dad08236385a93c2a27b16f6d1cde\"" Aug 13 00:23:25.271875 systemd[1]: Started cri-containerd-4d76ab06ff00a72c6f9a9a1c5b9b74907f6dad08236385a93c2a27b16f6d1cde.scope - libcontainer container 4d76ab06ff00a72c6f9a9a1c5b9b74907f6dad08236385a93c2a27b16f6d1cde. Aug 13 00:23:25.307923 containerd[1728]: time="2025-08-13T00:23:25.307766321Z" level=info msg="StartContainer for \"4d76ab06ff00a72c6f9a9a1c5b9b74907f6dad08236385a93c2a27b16f6d1cde\" returns successfully" Aug 13 00:23:28.076128 containerd[1728]: time="2025-08-13T00:23:28.075836389Z" level=info msg="StopPodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.202 [WARNING][5813] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4056208b-d9c3-4786-99a2-567d10cf8d83", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b", Pod:"coredns-7c65d6cfc9-2x28l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706ad445546", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.202 [INFO][5813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.202 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" iface="eth0" netns="" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.202 [INFO][5813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.202 [INFO][5813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.286 [INFO][5828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.286 [INFO][5828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.286 [INFO][5828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.316 [WARNING][5828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.317 [INFO][5828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.321 [INFO][5828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.338271 containerd[1728]: 2025-08-13 00:23:28.329 [INFO][5813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.340384 containerd[1728]: time="2025-08-13T00:23:28.338995497Z" level=info msg="TearDown network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" successfully" Aug 13 00:23:28.340384 containerd[1728]: time="2025-08-13T00:23:28.339361338Z" level=info msg="StopPodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" returns successfully" Aug 13 00:23:28.341348 containerd[1728]: time="2025-08-13T00:23:28.340417660Z" level=info msg="RemovePodSandbox for \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" Aug 13 00:23:28.341348 containerd[1728]: time="2025-08-13T00:23:28.340458420Z" level=info msg="Forcibly stopping sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\"" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.409 [WARNING][5866] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4056208b-d9c3-4786-99a2-567d10cf8d83", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"a5d4a4867c6319c85ab362412b2698e8afbbebbd7a10cdc2f1026ffb18975b5b", Pod:"coredns-7c65d6cfc9-2x28l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali706ad445546", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 
00:23:28.409 [INFO][5866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.410 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" iface="eth0" netns="" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.410 [INFO][5866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.410 [INFO][5866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.454 [INFO][5879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.454 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.454 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.466 [WARNING][5879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.467 [INFO][5879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" HandleID="k8s-pod-network.61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--2x28l-eth0" Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.469 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.475706 containerd[1728]: 2025-08-13 00:23:28.471 [INFO][5866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9" Aug 13 00:23:28.475706 containerd[1728]: time="2025-08-13T00:23:28.474837119Z" level=info msg="TearDown network for sandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" successfully" Aug 13 00:23:28.486070 containerd[1728]: time="2025-08-13T00:23:28.485987620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:28.487110 containerd[1728]: time="2025-08-13T00:23:28.487053782Z" level=info msg="RemovePodSandbox \"61e50ee7ec0ec4cea3409708007468c37ba6ec04d61d2af6da44e61a51927dc9\" returns successfully" Aug 13 00:23:28.489332 containerd[1728]: time="2025-08-13T00:23:28.489272747Z" level=info msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.539 [WARNING][5893] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3", Pod:"calico-apiserver-7b596cbbd-vbl26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d0a83608c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.540 [INFO][5893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.540 [INFO][5893] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" iface="eth0" netns="" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.540 [INFO][5893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.540 [INFO][5893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.570 [INFO][5900] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.570 [INFO][5900] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.570 [INFO][5900] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.585 [WARNING][5900] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.585 [INFO][5900] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.588 [INFO][5900] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.591911 containerd[1728]: 2025-08-13 00:23:28.590 [INFO][5893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.591911 containerd[1728]: time="2025-08-13T00:23:28.591873984Z" level=info msg="TearDown network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" successfully" Aug 13 00:23:28.593311 containerd[1728]: time="2025-08-13T00:23:28.592688026Z" level=info msg="StopPodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" returns successfully" Aug 13 00:23:28.594614 containerd[1728]: time="2025-08-13T00:23:28.594142869Z" level=info msg="RemovePodSandbox for \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" Aug 13 00:23:28.594614 containerd[1728]: time="2025-08-13T00:23:28.594186549Z" level=info msg="Forcibly stopping sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\"" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.643 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f3ad36d-19e4-4d1b-bbd6-393f7e34f68a", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"e85d0d3844ed9b023265e7a690d9e1b76c009eed452bee5aeffb4ad6996db2c3", Pod:"calico-apiserver-7b596cbbd-vbl26", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d0a83608c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.643 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.644 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" iface="eth0" netns="" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.644 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.644 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.677 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.677 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.677 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.694 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.694 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" HandleID="k8s-pod-network.5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--vbl26-eth0" Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.698 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.705242 containerd[1728]: 2025-08-13 00:23:28.700 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818" Aug 13 00:23:28.706034 containerd[1728]: time="2025-08-13T00:23:28.705793164Z" level=info msg="TearDown network for sandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" successfully" Aug 13 00:23:28.723282 containerd[1728]: time="2025-08-13T00:23:28.723095317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:28.723282 containerd[1728]: time="2025-08-13T00:23:28.723184957Z" level=info msg="RemovePodSandbox \"5127962929c13a516c8d556042b8ae5e18e0f65793110eb6c2961289a9991818\" returns successfully" Aug 13 00:23:28.724223 containerd[1728]: time="2025-08-13T00:23:28.723873839Z" level=info msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.763 [WARNING][5936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5", Pod:"coredns-7c65d6cfc9-gm4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24c1ea57ef5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.763 [INFO][5936] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.763 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" iface="eth0" netns="" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.763 [INFO][5936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.763 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.785 [INFO][5943] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.785 [INFO][5943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.786 [INFO][5943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.796 [WARNING][5943] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.797 [INFO][5943] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.799 [INFO][5943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.805216 containerd[1728]: 2025-08-13 00:23:28.803 [INFO][5936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.805216 containerd[1728]: time="2025-08-13T00:23:28.805071475Z" level=info msg="TearDown network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" successfully" Aug 13 00:23:28.805216 containerd[1728]: time="2025-08-13T00:23:28.805097315Z" level=info msg="StopPodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" returns successfully" Aug 13 00:23:28.806286 containerd[1728]: time="2025-08-13T00:23:28.805905917Z" level=info msg="RemovePodSandbox for \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" Aug 13 00:23:28.806286 containerd[1728]: time="2025-08-13T00:23:28.805938397Z" level=info msg="Forcibly stopping sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\"" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.849 [WARNING][5958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4ed9f975-b4fd-44f7-a88d-65b130cbc3e0", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"705346fefada816669db609c8568f0b312197da08142359e992b4ecaa3229cf5", Pod:"coredns-7c65d6cfc9-gm4v8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24c1ea57ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 
00:23:28.850 [INFO][5958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.850 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" iface="eth0" netns="" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.850 [INFO][5958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.850 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.873 [INFO][5966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.873 [INFO][5966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.873 [INFO][5966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.886 [WARNING][5966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.886 [INFO][5966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" HandleID="k8s-pod-network.63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-coredns--7c65d6cfc9--gm4v8-eth0" Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.888 [INFO][5966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:28.893680 containerd[1728]: 2025-08-13 00:23:28.890 [INFO][5958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5" Aug 13 00:23:28.895389 containerd[1728]: time="2025-08-13T00:23:28.895236209Z" level=info msg="TearDown network for sandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" successfully" Aug 13 00:23:28.910759 containerd[1728]: time="2025-08-13T00:23:28.910560558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:28.910759 containerd[1728]: time="2025-08-13T00:23:28.910659719Z" level=info msg="RemovePodSandbox \"63556f4f5fa4b8b0c493219e0a9d545336e7ff5e1301a8f464da2a16125582d5\" returns successfully" Aug 13 00:23:28.911445 containerd[1728]: time="2025-08-13T00:23:28.911395000Z" level=info msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" Aug 13 00:23:28.950750 containerd[1728]: time="2025-08-13T00:23:28.950389155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:28.953213 containerd[1728]: time="2025-08-13T00:23:28.952589519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:23:28.955551 containerd[1728]: time="2025-08-13T00:23:28.955456605Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:28.961209 containerd[1728]: time="2025-08-13T00:23:28.961135376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:23:28.962395 containerd[1728]: time="2025-08-13T00:23:28.961837737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.77033924s" Aug 13 00:23:28.962395 containerd[1728]: time="2025-08-13T00:23:28.961900457Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:23:28.966018 containerd[1728]: time="2025-08-13T00:23:28.965862985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:23:28.983328 containerd[1728]: time="2025-08-13T00:23:28.983289258Z" level=info msg="CreateContainer within sandbox \"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:23:29.016657 containerd[1728]: time="2025-08-13T00:23:29.016587763Z" level=info msg="CreateContainer within sandbox \"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"62fb2ef32b1cf00c6d7c144314f735d20b3531ca1687589bbe0e9b3bbb3edeb3\"" Aug 13 00:23:29.017261 containerd[1728]: time="2025-08-13T00:23:29.017137764Z" level=info msg="StartContainer for \"62fb2ef32b1cf00c6d7c144314f735d20b3531ca1687589bbe0e9b3bbb3edeb3\"" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.958 [WARNING][5980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9cfaf342-f7c6-417f-a4c3-39fca511cded", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666", Pod:"goldmane-58fd7646b9-wvtl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41dc607c139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.958 [INFO][5980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.958 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" iface="eth0" netns="" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.958 [INFO][5980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.958 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.999 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.999 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:28.999 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:29.011 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:29.011 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:29.013 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.022752 containerd[1728]: 2025-08-13 00:23:29.018 [INFO][5980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.022752 containerd[1728]: time="2025-08-13T00:23:29.022295654Z" level=info msg="TearDown network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" successfully" Aug 13 00:23:29.022752 containerd[1728]: time="2025-08-13T00:23:29.022323054Z" level=info msg="StopPodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" returns successfully" Aug 13 00:23:29.026224 containerd[1728]: time="2025-08-13T00:23:29.025858940Z" level=info msg="RemovePodSandbox for \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" Aug 13 00:23:29.026224 containerd[1728]: time="2025-08-13T00:23:29.025905341Z" level=info msg="Forcibly stopping sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\"" Aug 13 00:23:29.056860 systemd[1]: Started cri-containerd-62fb2ef32b1cf00c6d7c144314f735d20b3531ca1687589bbe0e9b3bbb3edeb3.scope - libcontainer container 62fb2ef32b1cf00c6d7c144314f735d20b3531ca1687589bbe0e9b3bbb3edeb3. 
Aug 13 00:23:29.116670 containerd[1728]: time="2025-08-13T00:23:29.116012794Z" level=info msg="StartContainer for \"62fb2ef32b1cf00c6d7c144314f735d20b3531ca1687589bbe0e9b3bbb3edeb3\" returns successfully" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.092 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9cfaf342-f7c6-417f-a4c3-39fca511cded", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"44a3f59a4750d40d66eabd1157db4bc78dab19b30a5175b62d9abe76aee5a666", Pod:"goldmane-58fd7646b9-wvtl5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41dc607c139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.154782 
containerd[1728]: 2025-08-13 00:23:29.092 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.093 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" iface="eth0" netns="" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.093 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.093 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.135 [INFO][6038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.135 [INFO][6038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.135 [INFO][6038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.146 [WARNING][6038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.146 [INFO][6038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" HandleID="k8s-pod-network.8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-goldmane--58fd7646b9--wvtl5-eth0" Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.147 [INFO][6038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.154782 containerd[1728]: 2025-08-13 00:23:29.151 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a" Aug 13 00:23:29.154782 containerd[1728]: time="2025-08-13T00:23:29.153664547Z" level=info msg="TearDown network for sandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" successfully" Aug 13 00:23:29.282409 containerd[1728]: time="2025-08-13T00:23:29.282346555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:29.282898 containerd[1728]: time="2025-08-13T00:23:29.282867276Z" level=info msg="RemovePodSandbox \"8b05fde2d6c0c5ab97b111bc8330a062fd59ca90953c1257cec5fb82ed648c7a\" returns successfully" Aug 13 00:23:29.284440 containerd[1728]: time="2025-08-13T00:23:29.284038518Z" level=info msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.341 [WARNING][6071] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.342 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.342 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" iface="eth0" netns="" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.342 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.342 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.369 [INFO][6078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.369 [INFO][6078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.369 [INFO][6078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.378 [WARNING][6078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.378 [INFO][6078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.382 [INFO][6078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.385532 containerd[1728]: 2025-08-13 00:23:29.384 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.386262 containerd[1728]: time="2025-08-13T00:23:29.386006594Z" level=info msg="TearDown network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" successfully" Aug 13 00:23:29.386262 containerd[1728]: time="2025-08-13T00:23:29.386034274Z" level=info msg="StopPodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" returns successfully" Aug 13 00:23:29.386865 containerd[1728]: time="2025-08-13T00:23:29.386771356Z" level=info msg="RemovePodSandbox for \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" Aug 13 00:23:29.386865 containerd[1728]: time="2025-08-13T00:23:29.386806716Z" level=info msg="Forcibly stopping sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\"" Aug 13 00:23:29.449265 kubelet[3195]: I0813 00:23:29.448890 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76ddbf5f64-sh7tp" podStartSLOduration=26.345496795 
podStartE2EDuration="36.448867756s" podCreationTimestamp="2025-08-13 00:22:53 +0000 UTC" firstStartedPulling="2025-08-13 00:23:18.859715739 +0000 UTC m=+51.249611350" lastFinishedPulling="2025-08-13 00:23:28.9630867 +0000 UTC m=+61.352982311" observedRunningTime="2025-08-13 00:23:29.446801512 +0000 UTC m=+61.836697123" watchObservedRunningTime="2025-08-13 00:23:29.448867756 +0000 UTC m=+61.838763367" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.430 [WARNING][6092] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" WorkloadEndpoint="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.430 [INFO][6092] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.431 [INFO][6092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" iface="eth0" netns="" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.431 [INFO][6092] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.431 [INFO][6092] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.475 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.475 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.475 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.485 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.486 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" HandleID="k8s-pod-network.e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-whisker--546fcc447--djg2f-eth0" Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.488 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.494502 containerd[1728]: 2025-08-13 00:23:29.492 [INFO][6092] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f" Aug 13 00:23:29.495311 containerd[1728]: time="2025-08-13T00:23:29.494735164Z" level=info msg="TearDown network for sandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" successfully" Aug 13 00:23:29.506032 containerd[1728]: time="2025-08-13T00:23:29.505931225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:29.506032 containerd[1728]: time="2025-08-13T00:23:29.506031786Z" level=info msg="RemovePodSandbox \"e40c3d1ea5274a410eb3eb327e9576c5ffda4b6da3eecfc8f40530efe8ab812f\" returns successfully" Aug 13 00:23:29.506680 containerd[1728]: time="2025-08-13T00:23:29.506630627Z" level=info msg="StopPodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.551 [WARNING][6135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef19092f-3405-456e-849a-6f31b80acace", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c", Pod:"calico-apiserver-7b596cbbd-dc5df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e96a0cf314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.551 [INFO][6135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.551 [INFO][6135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" iface="eth0" netns="" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.551 [INFO][6135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.551 [INFO][6135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.573 [INFO][6142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.573 [INFO][6142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.573 [INFO][6142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.586 [WARNING][6142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.586 [INFO][6142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.588 [INFO][6142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.592730 containerd[1728]: 2025-08-13 00:23:29.590 [INFO][6135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.593906 containerd[1728]: time="2025-08-13T00:23:29.592754353Z" level=info msg="TearDown network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" successfully" Aug 13 00:23:29.593906 containerd[1728]: time="2025-08-13T00:23:29.592780113Z" level=info msg="StopPodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" returns successfully" Aug 13 00:23:29.593906 containerd[1728]: time="2025-08-13T00:23:29.593231154Z" level=info msg="RemovePodSandbox for \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" Aug 13 00:23:29.593906 containerd[1728]: time="2025-08-13T00:23:29.593261954Z" level=info msg="Forcibly stopping sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\"" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.645 [WARNING][6156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0", GenerateName:"calico-apiserver-7b596cbbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef19092f-3405-456e-849a-6f31b80acace", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b596cbbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"ae5e0472ac998acce72a901489277558cdc2587166c55363b33b6a63c5b3a03c", Pod:"calico-apiserver-7b596cbbd-dc5df", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e96a0cf314", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.645 [INFO][6156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.646 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" iface="eth0" netns="" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.646 [INFO][6156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.646 [INFO][6156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.672 [INFO][6164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.672 [INFO][6164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.672 [INFO][6164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.684 [WARNING][6164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.684 [INFO][6164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" HandleID="k8s-pod-network.7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--apiserver--7b596cbbd--dc5df-eth0" Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.688 [INFO][6164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.693118 containerd[1728]: 2025-08-13 00:23:29.691 [INFO][6156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8" Aug 13 00:23:29.693546 containerd[1728]: time="2025-08-13T00:23:29.693209546Z" level=info msg="TearDown network for sandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" successfully" Aug 13 00:23:29.708786 containerd[1728]: time="2025-08-13T00:23:29.707858695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:29.708786 containerd[1728]: time="2025-08-13T00:23:29.707955295Z" level=info msg="RemovePodSandbox \"7f2a79165c56b3443741ae3aad1014982b4260625b741af0964a7d68341c89b8\" returns successfully" Aug 13 00:23:29.708786 containerd[1728]: time="2025-08-13T00:23:29.708411656Z" level=info msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.747 [WARNING][6178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75dc80d-3a70-4464-bb0d-78154b4f7aab", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803", Pod:"csi-node-driver-l6v4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f21a70f2c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.747 [INFO][6178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.747 [INFO][6178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" iface="eth0" netns="" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.747 [INFO][6178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.747 [INFO][6178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.768 [INFO][6185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.769 [INFO][6185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.769 [INFO][6185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.778 [WARNING][6185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.778 [INFO][6185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.779 [INFO][6185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.782775 containerd[1728]: 2025-08-13 00:23:29.781 [INFO][6178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.783570 containerd[1728]: time="2025-08-13T00:23:29.783400960Z" level=info msg="TearDown network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" successfully" Aug 13 00:23:29.783570 containerd[1728]: time="2025-08-13T00:23:29.783449520Z" level=info msg="StopPodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" returns successfully" Aug 13 00:23:29.784341 containerd[1728]: time="2025-08-13T00:23:29.784305722Z" level=info msg="RemovePodSandbox for \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" Aug 13 00:23:29.784396 containerd[1728]: time="2025-08-13T00:23:29.784387082Z" level=info msg="Forcibly stopping sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\"" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.833 [WARNING][6199] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f75dc80d-3a70-4464-bb0d-78154b4f7aab", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803", Pod:"csi-node-driver-l6v4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f21a70f2c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.837 [INFO][6199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.837 [INFO][6199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" iface="eth0" netns="" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.837 [INFO][6199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.837 [INFO][6199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.858 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.858 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.858 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.868 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.868 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" HandleID="k8s-pod-network.c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-csi--node--driver--l6v4g-eth0" Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.870 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:23:29.873011 containerd[1728]: 2025-08-13 00:23:29.871 [INFO][6199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966" Aug 13 00:23:29.873409 containerd[1728]: time="2025-08-13T00:23:29.873055733Z" level=info msg="TearDown network for sandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" successfully" Aug 13 00:23:29.880918 containerd[1728]: time="2025-08-13T00:23:29.880848628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:23:29.881198 containerd[1728]: time="2025-08-13T00:23:29.880941428Z" level=info msg="RemovePodSandbox \"c00e25b9751238f404f3ab54c83af56ed52db8a53e4f3626503c42d92cd73966\" returns successfully" Aug 13 00:23:29.881453 containerd[1728]: time="2025-08-13T00:23:29.881425069Z" level=info msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\"" Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.914 [WARNING][6220] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0", GenerateName:"calico-kube-controllers-76ddbf5f64-", Namespace:"calico-system", SelfLink:"", UID:"eacb4a1e-81fe-44e2-8375-85476e370ebd", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddbf5f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a", Pod:"calico-kube-controllers-76ddbf5f64-sh7tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide13efb0ea7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.914 [INFO][6220] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.914 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" iface="eth0" netns=""
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.914 [INFO][6220] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.915 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.933 [INFO][6227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.933 [INFO][6227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.933 [INFO][6227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.942 [WARNING][6227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.942 [INFO][6227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.944 [INFO][6227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:29.946986 containerd[1728]: 2025-08-13 00:23:29.945 [INFO][6220] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:29.947397 containerd[1728]: time="2025-08-13T00:23:29.947050875Z" level=info msg="TearDown network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" successfully"
Aug 13 00:23:29.947397 containerd[1728]: time="2025-08-13T00:23:29.947090355Z" level=info msg="StopPodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" returns successfully"
Aug 13 00:23:29.947865 containerd[1728]: time="2025-08-13T00:23:29.947836717Z" level=info msg="RemovePodSandbox for \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\""
Aug 13 00:23:29.947908 containerd[1728]: time="2025-08-13T00:23:29.947893197Z" level=info msg="Forcibly stopping sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\""
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:29.982 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0", GenerateName:"calico-kube-controllers-76ddbf5f64-", Namespace:"calico-system", SelfLink:"", UID:"eacb4a1e-81fe-44e2-8375-85476e370ebd", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ddbf5f64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-a-c1c2bc5336", ContainerID:"5ccb0474ba509693238cc9f0a0b3284f60093260dc9e2b3f92f210c48de58e2a", Pod:"calico-kube-controllers-76ddbf5f64-sh7tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide13efb0ea7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:29.983 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:29.983 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" iface="eth0" netns=""
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:29.983 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:29.983 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.018 [INFO][6248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.019 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.019 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.032 [WARNING][6248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.032 [INFO][6248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" HandleID="k8s-pod-network.be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c" Workload="ci--4081.3.5--a--c1c2bc5336-k8s-calico--kube--controllers--76ddbf5f64--sh7tp-eth0"
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.034 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:23:30.037577 containerd[1728]: 2025-08-13 00:23:30.036 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c"
Aug 13 00:23:30.038021 containerd[1728]: time="2025-08-13T00:23:30.037625090Z" level=info msg="TearDown network for sandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" successfully"
Aug 13 00:23:30.099316 containerd[1728]: time="2025-08-13T00:23:30.099256529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 00:23:30.099458 containerd[1728]: time="2025-08-13T00:23:30.099363169Z" level=info msg="RemovePodSandbox \"be84dcc79786888f629e9186eee3235fbf81155ea4ef2496e9f3e1b6b9e0de8c\" returns successfully"
Aug 13 00:23:30.265697 containerd[1728]: time="2025-08-13T00:23:30.265600849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.268033 containerd[1728]: time="2025-08-13T00:23:30.267887854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 13 00:23:30.271062 containerd[1728]: time="2025-08-13T00:23:30.270983940Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.276180 containerd[1728]: time="2025-08-13T00:23:30.276105509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:23:30.276933 containerd[1728]: time="2025-08-13T00:23:30.276780391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.310598966s"
Aug 13 00:23:30.276933 containerd[1728]: time="2025-08-13T00:23:30.276819191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 13 00:23:30.280443 containerd[1728]: time="2025-08-13T00:23:30.280028637Z" level=info msg="CreateContainer within sandbox \"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:23:30.300836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006550263.mount: Deactivated successfully.
Aug 13 00:23:30.307525 containerd[1728]: time="2025-08-13T00:23:30.307479890Z" level=info msg="CreateContainer within sandbox \"5c122f701808fa838774f4757a7e45923ffc45627e1995d10ee37e3bb5636803\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"499c05ae8c3864d47372ca1fbf538bf8dd2a9457d2df741d71ece8b85f920999\""
Aug 13 00:23:30.308663 containerd[1728]: time="2025-08-13T00:23:30.308606732Z" level=info msg="StartContainer for \"499c05ae8c3864d47372ca1fbf538bf8dd2a9457d2df741d71ece8b85f920999\""
Aug 13 00:23:30.353860 systemd[1]: Started cri-containerd-499c05ae8c3864d47372ca1fbf538bf8dd2a9457d2df741d71ece8b85f920999.scope - libcontainer container 499c05ae8c3864d47372ca1fbf538bf8dd2a9457d2df741d71ece8b85f920999.
Aug 13 00:23:30.387339 containerd[1728]: time="2025-08-13T00:23:30.387017603Z" level=info msg="StartContainer for \"499c05ae8c3864d47372ca1fbf538bf8dd2a9457d2df741d71ece8b85f920999\" returns successfully"
Aug 13 00:23:30.465239 kubelet[3195]: I0813 00:23:30.465182 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l6v4g" podStartSLOduration=25.617569574 podStartE2EDuration="38.465164354s" podCreationTimestamp="2025-08-13 00:22:52 +0000 UTC" firstStartedPulling="2025-08-13 00:23:17.430446493 +0000 UTC m=+49.820342104" lastFinishedPulling="2025-08-13 00:23:30.278041273 +0000 UTC m=+62.667936884" observedRunningTime="2025-08-13 00:23:30.463570031 +0000 UTC m=+62.853465602" watchObservedRunningTime="2025-08-13 00:23:30.465164354 +0000 UTC m=+62.855059925"
Aug 13 00:23:31.187057 kubelet[3195]: I0813 00:23:31.186970 3195 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:23:31.191389 kubelet[3195]: I0813 00:23:31.191050 3195 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:24:02.530925 kubelet[3195]: I0813 00:24:02.530144 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:24:02.768589 systemd[1]: run-containerd-runc-k8s.io-d736adc6ca48640d5eac420842b3015d2697ae554fc64b3109a5bf3e60d8c8b6-runc.O6QP0b.mount: Deactivated successfully.
Aug 13 00:24:16.576398 systemd[1]: Started sshd@7-10.200.20.42:22-10.200.16.10:35988.service - OpenSSH per-connection server daemon (10.200.16.10:35988).
Aug 13 00:24:17.066273 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 35988 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:17.069094 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:17.074022 systemd-logind[1707]: New session 10 of user core.
Aug 13 00:24:17.080851 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:24:17.532937 sshd[6463]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:17.536773 systemd[1]: sshd@7-10.200.20.42:22-10.200.16.10:35988.service: Deactivated successfully.
Aug 13 00:24:17.539140 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:24:17.540158 systemd-logind[1707]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:24:17.541788 systemd-logind[1707]: Removed session 10.
Aug 13 00:24:22.626979 systemd[1]: Started sshd@8-10.200.20.42:22-10.200.16.10:40168.service - OpenSSH per-connection server daemon (10.200.16.10:40168).
Aug 13 00:24:23.100449 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 40168 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:23.101919 sshd[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:23.106029 systemd-logind[1707]: New session 11 of user core.
Aug 13 00:24:23.108806 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:24:23.525347 sshd[6477]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:23.529192 systemd[1]: sshd@8-10.200.20.42:22-10.200.16.10:40168.service: Deactivated successfully.
Aug 13 00:24:23.531796 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:24:23.532652 systemd-logind[1707]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:24:23.533583 systemd-logind[1707]: Removed session 11.
Aug 13 00:24:28.619993 systemd[1]: Started sshd@9-10.200.20.42:22-10.200.16.10:40178.service - OpenSSH per-connection server daemon (10.200.16.10:40178).
Aug 13 00:24:29.111767 sshd[6516]: Accepted publickey for core from 10.200.16.10 port 40178 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:29.113239 sshd[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:29.117426 systemd-logind[1707]: New session 12 of user core.
Aug 13 00:24:29.121824 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:24:29.534261 sshd[6516]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:29.537951 systemd-logind[1707]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:24:29.538412 systemd[1]: sshd@9-10.200.20.42:22-10.200.16.10:40178.service: Deactivated successfully.
Aug 13 00:24:29.541261 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:24:29.543586 systemd-logind[1707]: Removed session 12.
Aug 13 00:24:29.622906 systemd[1]: Started sshd@10-10.200.20.42:22-10.200.16.10:40192.service - OpenSSH per-connection server daemon (10.200.16.10:40192).
Aug 13 00:24:30.095676 sshd[6530]: Accepted publickey for core from 10.200.16.10 port 40192 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:30.096583 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:30.102621 systemd-logind[1707]: New session 13 of user core.
Aug 13 00:24:30.108862 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:24:30.553813 sshd[6530]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:30.557387 systemd[1]: sshd@10-10.200.20.42:22-10.200.16.10:40192.service: Deactivated successfully.
Aug 13 00:24:30.560115 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:24:30.561613 systemd-logind[1707]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:24:30.563452 systemd-logind[1707]: Removed session 13.
Aug 13 00:24:30.644943 systemd[1]: Started sshd@11-10.200.20.42:22-10.200.16.10:32836.service - OpenSSH per-connection server daemon (10.200.16.10:32836).
Aug 13 00:24:31.130160 sshd[6541]: Accepted publickey for core from 10.200.16.10 port 32836 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:31.131022 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:31.135538 systemd-logind[1707]: New session 14 of user core.
Aug 13 00:24:31.142850 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:24:31.589279 sshd[6541]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:31.593164 systemd[1]: sshd@11-10.200.20.42:22-10.200.16.10:32836.service: Deactivated successfully.
Aug 13 00:24:31.597271 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:24:31.599704 systemd-logind[1707]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:24:31.601072 systemd-logind[1707]: Removed session 14.
Aug 13 00:24:36.686031 systemd[1]: Started sshd@12-10.200.20.42:22-10.200.16.10:32852.service - OpenSSH per-connection server daemon (10.200.16.10:32852).
Aug 13 00:24:37.172500 sshd[6626]: Accepted publickey for core from 10.200.16.10 port 32852 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:37.173484 sshd[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:37.177721 systemd-logind[1707]: New session 15 of user core.
Aug 13 00:24:37.186885 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:24:37.587973 sshd[6626]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:37.591429 systemd[1]: sshd@12-10.200.20.42:22-10.200.16.10:32852.service: Deactivated successfully.
Aug 13 00:24:37.594312 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:24:37.595415 systemd-logind[1707]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:24:37.596389 systemd-logind[1707]: Removed session 15.
Aug 13 00:24:37.681990 systemd[1]: Started sshd@13-10.200.20.42:22-10.200.16.10:32856.service - OpenSSH per-connection server daemon (10.200.16.10:32856).
Aug 13 00:24:38.171479 sshd[6638]: Accepted publickey for core from 10.200.16.10 port 32856 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:38.172700 sshd[6638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:38.176724 systemd-logind[1707]: New session 16 of user core.
Aug 13 00:24:38.182809 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:24:38.750860 sshd[6638]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:38.754118 systemd-logind[1707]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:24:38.754697 systemd[1]: sshd@13-10.200.20.42:22-10.200.16.10:32856.service: Deactivated successfully.
Aug 13 00:24:38.756481 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:24:38.757459 systemd-logind[1707]: Removed session 16.
Aug 13 00:24:38.852903 systemd[1]: Started sshd@14-10.200.20.42:22-10.200.16.10:32866.service - OpenSSH per-connection server daemon (10.200.16.10:32866).
Aug 13 00:24:39.337985 sshd[6649]: Accepted publickey for core from 10.200.16.10 port 32866 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:39.339395 sshd[6649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:39.343744 systemd-logind[1707]: New session 17 of user core.
Aug 13 00:24:39.350806 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:24:41.405964 sshd[6649]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:41.410258 systemd[1]: sshd@14-10.200.20.42:22-10.200.16.10:32866.service: Deactivated successfully.
Aug 13 00:24:41.410360 systemd-logind[1707]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:24:41.412562 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:24:41.414031 systemd-logind[1707]: Removed session 17.
Aug 13 00:24:41.489624 systemd[1]: Started sshd@15-10.200.20.42:22-10.200.16.10:52232.service - OpenSSH per-connection server daemon (10.200.16.10:52232).
Aug 13 00:24:41.948763 sshd[6670]: Accepted publickey for core from 10.200.16.10 port 52232 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:41.950505 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:41.954943 systemd-logind[1707]: New session 18 of user core.
Aug 13 00:24:41.964862 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:24:42.468746 sshd[6670]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:42.472155 systemd[1]: sshd@15-10.200.20.42:22-10.200.16.10:52232.service: Deactivated successfully.
Aug 13 00:24:42.475596 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:24:42.477541 systemd-logind[1707]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:24:42.480461 systemd-logind[1707]: Removed session 18.
Aug 13 00:24:42.555252 systemd[1]: Started sshd@16-10.200.20.42:22-10.200.16.10:52240.service - OpenSSH per-connection server daemon (10.200.16.10:52240).
Aug 13 00:24:43.031962 sshd[6681]: Accepted publickey for core from 10.200.16.10 port 52240 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:43.033408 sshd[6681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:43.038420 systemd-logind[1707]: New session 19 of user core.
Aug 13 00:24:43.046884 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:24:43.454575 sshd[6681]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:43.459516 systemd[1]: sshd@16-10.200.20.42:22-10.200.16.10:52240.service: Deactivated successfully.
Aug 13 00:24:43.463626 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:24:43.466468 systemd-logind[1707]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:24:43.467566 systemd-logind[1707]: Removed session 19.
Aug 13 00:24:48.549909 systemd[1]: Started sshd@17-10.200.20.42:22-10.200.16.10:52250.service - OpenSSH per-connection server daemon (10.200.16.10:52250).
Aug 13 00:24:49.043668 sshd[6718]: Accepted publickey for core from 10.200.16.10 port 52250 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:49.047007 sshd[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:49.053772 systemd-logind[1707]: New session 20 of user core.
Aug 13 00:24:49.062860 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:24:49.466875 sshd[6718]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:49.472757 systemd[1]: sshd@17-10.200.20.42:22-10.200.16.10:52250.service: Deactivated successfully.
Aug 13 00:24:49.477003 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:24:49.480311 systemd-logind[1707]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:24:49.482177 systemd-logind[1707]: Removed session 20.
Aug 13 00:24:54.565199 systemd[1]: Started sshd@18-10.200.20.42:22-10.200.16.10:37908.service - OpenSSH per-connection server daemon (10.200.16.10:37908).
Aug 13 00:24:55.042522 sshd[6752]: Accepted publickey for core from 10.200.16.10 port 37908 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:24:55.044341 sshd[6752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:55.049257 systemd-logind[1707]: New session 21 of user core.
Aug 13 00:24:55.055812 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:24:55.468746 sshd[6752]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:55.472271 systemd[1]: sshd@18-10.200.20.42:22-10.200.16.10:37908.service: Deactivated successfully.
Aug 13 00:24:55.474475 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:24:55.475367 systemd-logind[1707]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:24:55.476518 systemd-logind[1707]: Removed session 21.
Aug 13 00:25:00.556820 systemd[1]: Started sshd@19-10.200.20.42:22-10.200.16.10:36082.service - OpenSSH per-connection server daemon (10.200.16.10:36082).
Aug 13 00:25:01.046018 sshd[6785]: Accepted publickey for core from 10.200.16.10 port 36082 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:25:01.047622 sshd[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:25:01.052603 systemd-logind[1707]: New session 22 of user core.
Aug 13 00:25:01.057812 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:25:01.513908 sshd[6785]: pam_unix(sshd:session): session closed for user core
Aug 13 00:25:01.517810 systemd[1]: sshd@19-10.200.20.42:22-10.200.16.10:36082.service: Deactivated successfully.
Aug 13 00:25:01.520885 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:25:01.523484 systemd-logind[1707]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:25:01.524688 systemd-logind[1707]: Removed session 22.
Aug 13 00:25:06.600078 systemd[1]: Started sshd@20-10.200.20.42:22-10.200.16.10:36090.service - OpenSSH per-connection server daemon (10.200.16.10:36090).
Aug 13 00:25:07.084320 sshd[6839]: Accepted publickey for core from 10.200.16.10 port 36090 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:25:07.085954 sshd[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:25:07.090256 systemd-logind[1707]: New session 23 of user core.
Aug 13 00:25:07.095797 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:25:07.499540 sshd[6839]: pam_unix(sshd:session): session closed for user core
Aug 13 00:25:07.504243 systemd-logind[1707]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:25:07.504978 systemd[1]: sshd@20-10.200.20.42:22-10.200.16.10:36090.service: Deactivated successfully.
Aug 13 00:25:07.509070 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:25:07.510294 systemd-logind[1707]: Removed session 23.
Aug 13 00:25:12.585553 systemd[1]: Started sshd@21-10.200.20.42:22-10.200.16.10:56106.service - OpenSSH per-connection server daemon (10.200.16.10:56106).
Aug 13 00:25:13.059939 sshd[6852]: Accepted publickey for core from 10.200.16.10 port 56106 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:25:13.061287 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:25:13.065134 systemd-logind[1707]: New session 24 of user core.
Aug 13 00:25:13.070807 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:25:13.496887 sshd[6852]: pam_unix(sshd:session): session closed for user core
Aug 13 00:25:13.500093 systemd-logind[1707]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:25:13.501101 systemd[1]: sshd@21-10.200.20.42:22-10.200.16.10:56106.service: Deactivated successfully.
Aug 13 00:25:13.503584 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:25:13.505384 systemd-logind[1707]: Removed session 24.
Aug 13 00:25:18.591918 systemd[1]: Started sshd@22-10.200.20.42:22-10.200.16.10:56110.service - OpenSSH per-connection server daemon (10.200.16.10:56110).
Aug 13 00:25:19.085201 sshd[6865]: Accepted publickey for core from 10.200.16.10 port 56110 ssh2: RSA SHA256:zpa1ROX3CM+oLD/DkzMHgHkTwxVz2NjO3773yvsmOdI
Aug 13 00:25:19.087034 sshd[6865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:25:19.092708 systemd-logind[1707]: New session 25 of user core.
Aug 13 00:25:19.097816 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:25:19.495132 sshd[6865]: pam_unix(sshd:session): session closed for user core
Aug 13 00:25:19.498803 systemd[1]: sshd@22-10.200.20.42:22-10.200.16.10:56110.service: Deactivated successfully.
Aug 13 00:25:19.501649 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:25:19.502991 systemd-logind[1707]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:25:19.504177 systemd-logind[1707]: Removed session 25.