Jul 12 00:05:33.361970 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:05:33.361993 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:05:33.362001 kernel: KASLR enabled
Jul 12 00:05:33.362006 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 12 00:05:33.362014 kernel: printk: bootconsole [pl11] enabled
Jul 12 00:05:33.362019 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:05:33.362026 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jul 12 00:05:33.362033 kernel: random: crng init done
Jul 12 00:05:33.362038 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:05:33.362044 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jul 12 00:05:33.362050 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362056 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362063 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 12 00:05:33.362070 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362077 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362083 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362090 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362098 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362104 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362110 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 12 00:05:33.362117 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 12 00:05:33.362123 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 12 00:05:33.362129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 12 00:05:33.362136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 12 00:05:33.362142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 12 00:05:33.362148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 12 00:05:33.362155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 12 00:05:33.362161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 12 00:05:33.362169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 12 00:05:33.362175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 12 00:05:33.362182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 12 00:05:33.362188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 12 00:05:33.362194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 12 00:05:33.362201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 12 00:05:33.362207 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jul 12 00:05:33.362213 kernel: Zone ranges:
Jul 12 00:05:33.362219 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 12 00:05:33.362225 kernel: DMA32 empty
Jul 12 00:05:33.362232 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:05:33.362238 kernel: Movable zone start for each node
Jul 12 00:05:33.364303 kernel: Early memory node ranges
Jul 12 00:05:33.364315 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 12 00:05:33.364323 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jul 12 00:05:33.364330 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jul 12 00:05:33.364337 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jul 12 00:05:33.364345 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jul 12 00:05:33.364352 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jul 12 00:05:33.364359 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 12 00:05:33.364366 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 12 00:05:33.364373 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 12 00:05:33.364380 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:05:33.364386 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:05:33.364393 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:05:33.364399 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 12 00:05:33.364406 kernel: psci: SMC Calling Convention v1.4
Jul 12 00:05:33.364412 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 12 00:05:33.364419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 12 00:05:33.364427 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:05:33.364434 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:05:33.364441 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:05:33.364448 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:05:33.364454 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:05:33.364461 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:05:33.364468 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:05:33.364475 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:05:33.364481 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:05:33.364488 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:05:33.364495 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 12 00:05:33.364503 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:05:33.364510 kernel: alternatives: applying boot alternatives
Jul 12 00:05:33.364518 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:33.364525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:05:33.364532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:05:33.364539 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:05:33.364545 kernel: Fallback order for Node 0: 0
Jul 12 00:05:33.364552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 12 00:05:33.364559 kernel: Policy zone: Normal
Jul 12 00:05:33.364565 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:05:33.364572 kernel: software IO TLB: area num 2.
Jul 12 00:05:33.364580 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jul 12 00:05:33.364587 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved)
Jul 12 00:05:33.364594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:05:33.364601 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:05:33.364608 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:05:33.364615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:05:33.364622 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:05:33.364629 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:05:33.364636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:05:33.364643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:05:33.364649 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:05:33.364658 kernel: GICv3: 960 SPIs implemented
Jul 12 00:05:33.364664 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:05:33.364671 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:05:33.364677 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:05:33.364684 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 12 00:05:33.364691 kernel: ITS: No ITS available, not enabling LPIs
Jul 12 00:05:33.364698 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:05:33.364704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:33.364711 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:05:33.364718 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:05:33.364725 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:05:33.364733 kernel: Console: colour dummy device 80x25
Jul 12 00:05:33.364741 kernel: printk: console [tty1] enabled
Jul 12 00:05:33.364748 kernel: ACPI: Core revision 20230628
Jul 12 00:05:33.364755 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:05:33.364762 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:05:33.364768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:05:33.364775 kernel: landlock: Up and running.
Jul 12 00:05:33.364782 kernel: SELinux: Initializing.
Jul 12 00:05:33.364789 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:33.364796 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:05:33.364805 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:33.364812 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:05:33.364819 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 12 00:05:33.364826 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Jul 12 00:05:33.364833 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 12 00:05:33.364840 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:05:33.364847 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:05:33.364860 kernel: Remapping and enabling EFI services.
Jul 12 00:05:33.364867 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:05:33.364874 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:05:33.364881 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 12 00:05:33.364890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:05:33.364897 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:05:33.364904 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:05:33.364912 kernel: SMP: Total of 2 processors activated.
Jul 12 00:05:33.364919 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:05:33.364927 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 12 00:05:33.364935 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:05:33.364942 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:05:33.364950 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:05:33.364957 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:05:33.364964 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:05:33.364971 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:05:33.364978 kernel: alternatives: applying system-wide alternatives
Jul 12 00:05:33.364985 kernel: devtmpfs: initialized
Jul 12 00:05:33.364994 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:05:33.365002 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:05:33.365009 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:05:33.365016 kernel: SMBIOS 3.1.0 present.
Jul 12 00:05:33.365024 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jul 12 00:05:33.365031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:05:33.365039 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:05:33.365046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:05:33.365053 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:05:33.365062 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:05:33.365069 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jul 12 00:05:33.365076 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:05:33.365084 kernel: cpuidle: using governor menu
Jul 12 00:05:33.365091 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:05:33.365098 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:05:33.365105 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:05:33.365112 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:05:33.365119 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:05:33.365128 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:05:33.365135 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:05:33.365142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:05:33.365149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:05:33.365157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:05:33.365164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:05:33.365171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:05:33.365179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:05:33.365186 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:05:33.365194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:05:33.365201 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:05:33.365209 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:05:33.365216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:05:33.365223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:05:33.365231 kernel: ACPI: Interpreter enabled
Jul 12 00:05:33.365238 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:05:33.365254 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:05:33.365265 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:05:33.365276 kernel: printk: bootconsole [pl11] disabled
Jul 12 00:05:33.365283 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 12 00:05:33.365290 kernel: iommu: Default domain type: Translated
Jul 12 00:05:33.365297 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:05:33.365305 kernel: efivars: Registered efivars operations
Jul 12 00:05:33.365312 kernel: vgaarb: loaded
Jul 12 00:05:33.365319 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:05:33.365326 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:05:33.365333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:05:33.365342 kernel: pnp: PnP ACPI init
Jul 12 00:05:33.365350 kernel: pnp: PnP ACPI: found 0 devices
Jul 12 00:05:33.365357 kernel: NET: Registered PF_INET protocol family
Jul 12 00:05:33.365364 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:05:33.365371 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:05:33.365379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:05:33.365386 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:05:33.365393 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:05:33.365401 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:05:33.365409 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:33.365417 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:05:33.365424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:05:33.365431 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:05:33.365438 kernel: kvm [1]: HYP mode not available
Jul 12 00:05:33.365445 kernel: Initialise system trusted keyrings
Jul 12 00:05:33.365453 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:05:33.365460 kernel: Key type asymmetric registered
Jul 12 00:05:33.365467 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:05:33.365475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:05:33.365483 kernel: io scheduler mq-deadline registered
Jul 12 00:05:33.365490 kernel: io scheduler kyber registered
Jul 12 00:05:33.365497 kernel: io scheduler bfq registered
Jul 12 00:05:33.365504 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:05:33.365512 kernel: thunder_xcv, ver 1.0
Jul 12 00:05:33.365519 kernel: thunder_bgx, ver 1.0
Jul 12 00:05:33.365526 kernel: nicpf, ver 1.0
Jul 12 00:05:33.365533 kernel: nicvf, ver 1.0
Jul 12 00:05:33.365673 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:05:33.365745 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:05:32 UTC (1752278732)
Jul 12 00:05:33.365756 kernel: efifb: probing for efifb
Jul 12 00:05:33.365763 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 12 00:05:33.365771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 12 00:05:33.365778 kernel: efifb: scrolling: redraw
Jul 12 00:05:33.365785 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 12 00:05:33.365792 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:05:33.365801 kernel: fb0: EFI VGA frame buffer device
Jul 12 00:05:33.365808 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 12 00:05:33.365816 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:05:33.365823 kernel: No ACPI PMU IRQ for CPU0
Jul 12 00:05:33.365830 kernel: No ACPI PMU IRQ for CPU1
Jul 12 00:05:33.365837 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 12 00:05:33.365844 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:05:33.365852 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:05:33.365859 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:05:33.365867 kernel: Segment Routing with IPv6
Jul 12 00:05:33.365874 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:05:33.365882 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:05:33.365889 kernel: Key type dns_resolver registered
Jul 12 00:05:33.365896 kernel: registered taskstats version 1
Jul 12 00:05:33.365903 kernel: Loading compiled-in X.509 certificates
Jul 12 00:05:33.365911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:05:33.365918 kernel: Key type .fscrypt registered
Jul 12 00:05:33.365925 kernel: Key type fscrypt-provisioning registered
Jul 12 00:05:33.365933 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:05:33.365941 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:05:33.365948 kernel: ima: No architecture policies found
Jul 12 00:05:33.365956 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:05:33.365963 kernel: clk: Disabling unused clocks
Jul 12 00:05:33.365970 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:05:33.365977 kernel: Run /init as init process
Jul 12 00:05:33.365984 kernel: with arguments:
Jul 12 00:05:33.365991 kernel: /init
Jul 12 00:05:33.366000 kernel: with environment:
Jul 12 00:05:33.366007 kernel: HOME=/
Jul 12 00:05:33.366014 kernel: TERM=linux
Jul 12 00:05:33.366021 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:05:33.366031 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:05:33.366040 systemd[1]: Detected virtualization microsoft.
Jul 12 00:05:33.366048 systemd[1]: Detected architecture arm64.
Jul 12 00:05:33.366055 systemd[1]: Running in initrd.
Jul 12 00:05:33.366065 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:05:33.366072 systemd[1]: Hostname set to .
Jul 12 00:05:33.366080 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:05:33.366088 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:05:33.366096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:33.366103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:33.366112 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:05:33.366120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:05:33.366129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:05:33.366137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:05:33.366146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:05:33.366154 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:05:33.366162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:33.366170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:33.366180 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:05:33.366187 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:05:33.366195 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:05:33.366203 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:05:33.366211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:33.366218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:33.366226 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:05:33.366234 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:05:33.366242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:33.372426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:33.372436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:33.372444 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:05:33.372452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:05:33.372460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:05:33.372468 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:05:33.372475 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:05:33.372483 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:05:33.372491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:05:33.372525 systemd-journald[217]: Collecting audit messages is disabled.
Jul 12 00:05:33.372545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:33.372559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:05:33.372571 systemd-journald[217]: Journal started
Jul 12 00:05:33.372590 systemd-journald[217]: Runtime Journal (/run/log/journal/b89b6c0009c84ffe976fa3961561d88e) is 8.0M, max 78.5M, 70.5M free.
Jul 12 00:05:33.388290 kernel: Bridge firewalling registered
Jul 12 00:05:33.361346 systemd-modules-load[218]: Inserted module 'overlay'
Jul 12 00:05:33.387667 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jul 12 00:05:33.411576 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:05:33.412486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:05:33.427062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:33.435651 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:05:33.447217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:33.457666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:33.477482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:33.492428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:05:33.511725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:05:33.525453 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:05:33.538311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:33.561173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:33.570580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:05:33.583793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:33.612670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:05:33.626868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:33.641365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:05:33.663900 dracut-cmdline[252]: dracut-dracut-053
Jul 12 00:05:33.680491 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:05:33.671115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:33.674224 systemd-resolved[257]: Positive Trust Anchors:
Jul 12 00:05:33.674234 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:05:33.674284 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:05:33.676424 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jul 12 00:05:33.682114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:33.723176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:05:33.812265 kernel: SCSI subsystem initialized
Jul 12 00:05:33.820260 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:05:33.831412 kernel: iscsi: registered transport (tcp)
Jul 12 00:05:33.850416 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:05:33.850433 kernel: QLogic iSCSI HBA Driver
Jul 12 00:05:33.889659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:33.904451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:05:33.937974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:05:33.938040 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:05:33.944681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:05:33.993272 kernel: raid6: neonx8 gen() 15758 MB/s
Jul 12 00:05:34.013258 kernel: raid6: neonx4 gen() 15670 MB/s
Jul 12 00:05:34.033255 kernel: raid6: neonx2 gen() 13236 MB/s
Jul 12 00:05:34.054256 kernel: raid6: neonx1 gen() 10480 MB/s
Jul 12 00:05:34.074259 kernel: raid6: int64x8 gen() 6960 MB/s
Jul 12 00:05:34.094254 kernel: raid6: int64x4 gen() 7353 MB/s
Jul 12 00:05:34.115256 kernel: raid6: int64x2 gen() 6133 MB/s
Jul 12 00:05:34.139446 kernel: raid6: int64x1 gen() 5061 MB/s
Jul 12 00:05:34.139457 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Jul 12 00:05:34.164186 kernel: raid6: .... xor() 11957 MB/s, rmw enabled
Jul 12 00:05:34.164207 kernel: raid6: using neon recovery algorithm
Jul 12 00:05:34.176937 kernel: xor: measuring software checksum speed
Jul 12 00:05:34.176955 kernel: 8regs : 19693 MB/sec
Jul 12 00:05:34.184203 kernel: 32regs : 18628 MB/sec
Jul 12 00:05:34.184214 kernel: arm64_neon : 27114 MB/sec
Jul 12 00:05:34.188643 kernel: xor: using function: arm64_neon (27114 MB/sec)
Jul 12 00:05:34.239262 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:05:34.250290 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:34.266394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:34.290583 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jul 12 00:05:34.296996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:34.318447 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:05:34.331268 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Jul 12 00:05:34.356441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:34.375518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:05:34.410237 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:34.436865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:05:34.466681 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:05:34.480790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:05:34.496126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:34.510391 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:05:34.528265 kernel: hv_vmbus: Vmbus version:5.3
Jul 12 00:05:34.532448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:05:34.725723 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 12 00:05:34.725746 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 12 00:05:34.725756 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 12 00:05:34.725774 kernel: PTP clock support registered
Jul 12 00:05:34.725784 kernel: hv_vmbus: registering driver hid_hyperv
Jul 12 00:05:34.725793 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 12 00:05:34.725803 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 12 00:05:34.725812 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 12 00:05:34.725960 kernel: hv_utils: Registering HyperV Utility Driver
Jul 12 00:05:34.725970 kernel: hv_vmbus: registering driver hv_utils
Jul 12 00:05:34.725979 kernel: hv_utils: Heartbeat IC version 3.0
Jul 12 00:05:34.725991 kernel: hv_utils: Shutdown IC version 3.2
Jul 12 00:05:34.726000 kernel: hv_utils: TimeSync IC version 4.0
Jul 12 00:05:34.726009 kernel: hv_vmbus: registering driver hv_storvsc
Jul 12 00:05:34.726018 kernel: hv_vmbus: registering driver hv_netvsc
Jul 12 00:05:34.726027 kernel: scsi host1: storvsc_host_t
Jul 12 00:05:34.726142 kernel: scsi host0: storvsc_host_t
Jul 12 00:05:34.726223 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 12 00:05:34.726247 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 12 00:05:34.624145 systemd-resolved[257]: Clock change detected. Flushing caches.
Jul 12 00:05:34.721127 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:05:34.768929 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: VF slot 1 added
Jul 12 00:05:34.770240 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 12 00:05:34.770354 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 12 00:05:34.737640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:34.786303 kernel: hv_vmbus: registering driver hv_pci
Jul 12 00:05:34.737770 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:34.808199 kernel: hv_pci 14e00f9b-9828-48da-9a76-49d110ab5156: PCI VMBus probing: Using version 0x10004
Jul 12 00:05:34.808359 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 12 00:05:34.768670 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:05:34.838738 kernel: hv_pci 14e00f9b-9828-48da-9a76-49d110ab5156: PCI host bridge to bus 9828:00
Jul 12 00:05:34.838901 kernel: pci_bus 9828:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 12 00:05:34.839000 kernel: pci_bus 9828:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 12 00:05:34.776995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:34.777179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:34.863934 kernel: pci 9828:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 12 00:05:34.863974 kernel: pci 9828:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 12 00:05:34.795186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:34.930183 kernel: pci 9828:00:02.0: enabling Extended Tags Jul 12 00:05:34.930358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 12 00:05:34.930466 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 12 00:05:34.930549 kernel: pci 9828:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9828:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 12 00:05:34.930632 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 12 00:05:34.930735 kernel: pci_bus 9828:00: busn_res: [bus 00-ff] end is updated to 00 Jul 12 00:05:34.930820 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 12 00:05:34.930902 kernel: pci 9828:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:05:34.930985 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 12 00:05:34.852793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:34.950476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:34.893916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:34.960539 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 12 00:05:34.961253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:05:34.995419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 12 00:05:35.025081 kernel: mlx5_core 9828:00:02.0: enabling device (0000 -> 0002) Jul 12 00:05:35.032107 kernel: mlx5_core 9828:00:02.0: firmware version: 16.31.2424 Jul 12 00:05:35.311619 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: VF registering: eth1 Jul 12 00:05:35.311825 kernel: mlx5_core 9828:00:02.0 eth1: joined to eth0 Jul 12 00:05:35.321182 kernel: mlx5_core 9828:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 12 00:05:35.336133 kernel: mlx5_core 9828:00:02.0 enP38952s1: renamed from eth1 Jul 12 00:05:35.507114 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502) Jul 12 00:05:35.515267 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 12 00:05:35.528193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 12 00:05:35.571205 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Jul 12 00:05:35.577829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 12 00:05:35.596472 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 12 00:05:35.603487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 12 00:05:35.628319 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:05:35.652109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:35.659105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:36.668977 disk-uuid[601]: The operation has completed successfully. Jul 12 00:05:36.674263 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:36.724079 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:05:36.724195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 12 00:05:36.752231 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:05:36.766055 sh[714]: Success Jul 12 00:05:36.795344 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:05:36.978570 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:05:36.984578 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:05:36.999249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:05:37.030941 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:05:37.030994 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:37.038096 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:05:37.043540 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:05:37.048893 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:05:37.295737 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:05:37.301459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:05:37.318418 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:05:37.324245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:05:37.363008 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:37.363033 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:37.367628 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:37.410119 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:37.425436 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 12 00:05:37.431123 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:37.439277 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:05:37.445390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:05:37.466556 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:05:37.479309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:05:37.510690 systemd-networkd[898]: lo: Link UP Jul 12 00:05:37.510704 systemd-networkd[898]: lo: Gained carrier Jul 12 00:05:37.512318 systemd-networkd[898]: Enumeration completed Jul 12 00:05:37.512435 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:05:37.521720 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:05:37.521724 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:05:37.522727 systemd[1]: Reached target network.target - Network. Jul 12 00:05:37.615109 kernel: mlx5_core 9828:00:02.0 enP38952s1: Link up Jul 12 00:05:37.694112 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: Data path switched to VF: enP38952s1 Jul 12 00:05:37.694743 systemd-networkd[898]: enP38952s1: Link UP Jul 12 00:05:37.694979 systemd-networkd[898]: eth0: Link UP Jul 12 00:05:37.695413 systemd-networkd[898]: eth0: Gained carrier Jul 12 00:05:37.695422 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:05:37.703597 systemd-networkd[898]: enP38952s1: Gained carrier Jul 12 00:05:37.730147 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:05:38.216170 ignition[897]: Ignition 2.19.0 Jul 12 00:05:38.216181 ignition[897]: Stage: fetch-offline Jul 12 00:05:38.220851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:05:38.216216 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.216224 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.216311 ignition[897]: parsed url from cmdline: "" Jul 12 00:05:38.216314 ignition[897]: no config URL provided Jul 12 00:05:38.216318 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:05:38.248420 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 12 00:05:38.216324 ignition[897]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:05:38.216329 ignition[897]: failed to fetch config: resource requires networking Jul 12 00:05:38.216567 ignition[897]: Ignition finished successfully Jul 12 00:05:38.272492 ignition[907]: Ignition 2.19.0 Jul 12 00:05:38.272499 ignition[907]: Stage: fetch Jul 12 00:05:38.272792 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.272804 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.272928 ignition[907]: parsed url from cmdline: "" Jul 12 00:05:38.272932 ignition[907]: no config URL provided Jul 12 00:05:38.272937 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:05:38.272945 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:05:38.272977 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 12 00:05:38.423639 ignition[907]: GET result: OK Jul 12 00:05:38.423748 ignition[907]: config has been read from IMDS 
userdata Jul 12 00:05:38.423831 ignition[907]: parsing config with SHA512: 1dd38187578391c8e9e438aa2855f1af9e725013ec2bc256105808904f8621ae2416a143bfc2080881bae91fafbaf2ffe290a639ff30dcc1c0ac588c02d20d86 Jul 12 00:05:38.428027 unknown[907]: fetched base config from "system" Jul 12 00:05:38.428500 ignition[907]: fetch: fetch complete Jul 12 00:05:38.428034 unknown[907]: fetched base config from "system" Jul 12 00:05:38.428505 ignition[907]: fetch: fetch passed Jul 12 00:05:38.428039 unknown[907]: fetched user config from "azure" Jul 12 00:05:38.428559 ignition[907]: Ignition finished successfully Jul 12 00:05:38.433906 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 12 00:05:38.461234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 00:05:38.477571 ignition[913]: Ignition 2.19.0 Jul 12 00:05:38.477580 ignition[913]: Stage: kargs Jul 12 00:05:38.484638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:05:38.477798 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.477807 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.479319 ignition[913]: kargs: kargs passed Jul 12 00:05:38.479367 ignition[913]: Ignition finished successfully Jul 12 00:05:38.511317 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:05:38.531646 ignition[919]: Ignition 2.19.0 Jul 12 00:05:38.531658 ignition[919]: Stage: disks Jul 12 00:05:38.534945 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:05:38.531832 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.541533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:05:38.531841 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.548247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jul 12 00:05:38.532757 ignition[919]: disks: disks passed Jul 12 00:05:38.559936 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:05:38.532800 ignition[919]: Ignition finished successfully Jul 12 00:05:38.570997 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:05:38.581590 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:05:38.605339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:05:38.688011 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 12 00:05:38.696996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:05:38.716291 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:05:38.778287 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none. Jul 12 00:05:38.773947 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:05:38.779610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:05:38.821163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:05:38.831015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:05:38.838323 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 12 00:05:38.859636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 12 00:05:38.901794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Jul 12 00:05:38.901818 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:38.901829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:38.901839 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:38.859684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:05:38.880418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:05:38.924503 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 00:05:38.937573 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:38.931793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:05:39.014314 systemd-networkd[898]: enP38952s1: Gained IPv6LL Jul 12 00:05:39.142279 systemd-networkd[898]: eth0: Gained IPv6LL Jul 12 00:05:39.275644 coreos-metadata[941]: Jul 12 00:05:39.275 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:05:39.286974 coreos-metadata[941]: Jul 12 00:05:39.286 INFO Fetch successful Jul 12 00:05:39.292272 coreos-metadata[941]: Jul 12 00:05:39.292 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:05:39.304499 coreos-metadata[941]: Jul 12 00:05:39.304 INFO Fetch successful Jul 12 00:05:39.318059 coreos-metadata[941]: Jul 12 00:05:39.318 INFO wrote hostname ci-4081.3.4-n-0fb9ec6aad to /sysroot/etc/hostname Jul 12 00:05:39.327058 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 12 00:05:39.574099 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:05:39.631958 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:05:39.641279 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:05:39.650054 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:05:40.513900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:05:40.527568 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:05:40.536244 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:05:40.553460 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:05:40.564100 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:40.583137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:05:40.596816 ignition[1058]: INFO : Ignition 2.19.0 Jul 12 00:05:40.596816 ignition[1058]: INFO : Stage: mount Jul 12 00:05:40.611262 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:40.611262 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:40.611262 ignition[1058]: INFO : mount: mount passed Jul 12 00:05:40.611262 ignition[1058]: INFO : Ignition finished successfully Jul 12 00:05:40.602001 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:05:40.623231 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:05:40.639298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 00:05:40.679874 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069) Jul 12 00:05:40.679922 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:40.690279 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:40.690308 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:40.696106 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:40.697955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:05:40.728385 ignition[1087]: INFO : Ignition 2.19.0 Jul 12 00:05:40.732533 ignition[1087]: INFO : Stage: files Jul 12 00:05:40.732533 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:40.732533 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:40.732533 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:05:40.754610 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:05:40.754610 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:05:40.814378 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:05:40.814859 unknown[1087]: wrote ssh authorized keys file for user: core Jul 12 00:05:40.983559 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:05:41.272377 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:05:41.272377 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:05:41.292959 ignition[1087]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:05:41.787492 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 12 00:05:42.671737 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:42.671737 ignition[1087]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 12 00:05:42.705333 ignition[1087]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : 
files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: files passed Jul 12 00:05:42.720175 ignition[1087]: INFO : Ignition finished successfully Jul 12 00:05:42.724960 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:05:42.769367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:05:42.788262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:05:42.887525 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:05:42.887525 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:05:42.816267 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 12 00:05:42.919195 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:05:42.816359 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 12 00:05:42.849621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:05:42.856855 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 00:05:42.888347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 00:05:42.929820 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:05:42.929927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:05:42.942517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:05:42.955134 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:05:42.966529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:05:42.969285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:05:43.012668 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:05:43.036397 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:05:43.059016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:05:43.066655 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:05:43.081711 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:05:43.093297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:05:43.093372 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:05:43.110456 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jul 12 00:05:43.116640 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:05:43.128315 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:05:43.140048 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:05:43.151571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:05:43.163856 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:05:43.176982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:05:43.190906 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:05:43.202179 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:05:43.214648 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:05:43.225364 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:05:43.225440 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:05:43.240637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:05:43.246866 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:05:43.258646 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:05:43.263976 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:05:43.270933 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:05:43.271001 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:05:43.288552 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:05:43.288604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:05:43.295777 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:05:43.295823 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 12 00:05:43.307131 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 12 00:05:43.307182 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:05:43.387380 ignition[1140]: INFO : Ignition 2.19.0 Jul 12 00:05:43.387380 ignition[1140]: INFO : Stage: umount Jul 12 00:05:43.387380 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:43.387380 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:43.387380 ignition[1140]: INFO : umount: umount passed Jul 12 00:05:43.387380 ignition[1140]: INFO : Ignition finished successfully Jul 12 00:05:43.342249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:05:43.379185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:05:43.391846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:05:43.391917 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:05:43.409232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:05:43.409299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:05:43.423293 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:05:43.423814 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:05:43.423911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:05:43.431617 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:05:43.431713 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:05:43.447206 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:05:43.447306 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:05:43.458848 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jul 12 00:05:43.458905 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:05:43.471124 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:05:43.471170 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 12 00:05:43.483314 systemd[1]: Stopped target network.target - Network. Jul 12 00:05:43.494982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:05:43.495039 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:05:43.508427 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:05:43.519601 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:05:43.526871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:05:43.534622 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 00:05:43.544786 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:05:43.557112 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:05:43.557166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:05:43.567529 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:05:43.567571 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:05:43.573043 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:05:43.573121 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:05:43.583653 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:05:43.583700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:05:43.594530 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:05:43.605432 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jul 12 00:05:43.616125 systemd-networkd[898]: eth0: DHCPv6 lease lost Jul 12 00:05:43.835638 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: Data path switched from VF: enP38952s1 Jul 12 00:05:43.622701 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:05:43.625125 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:05:43.634788 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:05:43.634827 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:05:43.658217 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:05:43.669467 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:05:43.669533 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:05:43.680985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:05:43.697023 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:05:43.697207 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:05:43.733996 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:05:43.734112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:05:43.744150 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:05:43.744207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:05:43.755367 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:05:43.755419 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:05:43.770882 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:05:43.771039 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 12 00:05:43.783248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:05:43.783318 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:05:43.794460 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:05:43.794492 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:05:43.805537 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:05:43.805584 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:05:43.831444 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:05:43.831500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:05:43.846330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:05:43.846399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:43.896349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:05:43.910294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:05:43.910370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:05:43.923599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 12 00:05:43.923648 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:05:43.935586 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:05:43.935631 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:05:43.948032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:05:43.948075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:43.959869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 12 00:05:43.959968 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:05:43.974875 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:05:43.974996 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:05:43.985925 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:05:43.985999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:05:43.998513 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:05:44.190619 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jul 12 00:05:44.008570 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:05:44.008647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:05:44.038321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:05:44.129692 systemd[1]: Switching root. Jul 12 00:05:44.210782 systemd-journald[217]: Journal stopped Jul 12 00:05:33.361970 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 00:05:33.361993 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025 Jul 12 00:05:33.362001 kernel: KASLR enabled Jul 12 00:05:33.362006 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 12 00:05:33.362014 kernel: printk: bootconsole [pl11] enabled Jul 12 00:05:33.362019 kernel: efi: EFI v2.7 by EDK II Jul 12 00:05:33.362026 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jul 12 00:05:33.362033 kernel: random: crng init done Jul 12 00:05:33.362038 kernel: ACPI: Early table checksum verification disabled Jul 12 00:05:33.362044 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jul 12 00:05:33.362050 kernel: ACPI: 
XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362056 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362063 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jul 12 00:05:33.362070 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362077 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362083 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362090 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362098 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362104 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362110 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 12 00:05:33.362117 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 12 00:05:33.362123 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 12 00:05:33.362129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 12 00:05:33.362136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 12 00:05:33.362142 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 12 00:05:33.362148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 12 00:05:33.362155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 12 00:05:33.362161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 12 00:05:33.362169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 12 00:05:33.362175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 12 
00:05:33.362182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 12 00:05:33.362188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 12 00:05:33.362194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 12 00:05:33.362201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 12 00:05:33.362207 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jul 12 00:05:33.362213 kernel: Zone ranges: Jul 12 00:05:33.362219 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 12 00:05:33.362225 kernel: DMA32 empty Jul 12 00:05:33.362232 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:05:33.362238 kernel: Movable zone start for each node Jul 12 00:05:33.364303 kernel: Early memory node ranges Jul 12 00:05:33.364315 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 12 00:05:33.364323 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jul 12 00:05:33.364330 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jul 12 00:05:33.364337 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jul 12 00:05:33.364345 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jul 12 00:05:33.364352 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jul 12 00:05:33.364359 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 12 00:05:33.364366 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 12 00:05:33.364373 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 12 00:05:33.364380 kernel: psci: probing for conduit method from ACPI. Jul 12 00:05:33.364386 kernel: psci: PSCIv1.1 detected in firmware. Jul 12 00:05:33.364393 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:05:33.364399 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 12 00:05:33.364406 kernel: psci: SMC Calling Convention v1.4 Jul 12 00:05:33.364412 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 12 00:05:33.364419 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 12 00:05:33.364427 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:05:33.364434 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:05:33.364441 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:05:33.364448 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:05:33.364454 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:05:33.364461 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:05:33.364468 kernel: CPU features: detected: Spectre-BHB Jul 12 00:05:33.364475 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:05:33.364481 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:05:33.364488 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:05:33.364495 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 12 00:05:33.364503 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:05:33.364510 kernel: alternatives: applying boot alternatives Jul 12 00:05:33.364518 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:05:33.364525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 12 00:05:33.364532 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:05:33.364539 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:05:33.364545 kernel: Fallback order for Node 0: 0 Jul 12 00:05:33.364552 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 12 00:05:33.364559 kernel: Policy zone: Normal Jul 12 00:05:33.364565 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:05:33.364572 kernel: software IO TLB: area num 2. Jul 12 00:05:33.364580 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jul 12 00:05:33.364587 kernel: Memory: 3982628K/4194160K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 211532K reserved, 0K cma-reserved) Jul 12 00:05:33.364594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:05:33.364601 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:05:33.364608 kernel: rcu: RCU event tracing is enabled. Jul 12 00:05:33.364615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:05:33.364622 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:05:33.364629 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:05:33.364636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:05:33.364643 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:05:33.364649 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:05:33.364658 kernel: GICv3: 960 SPIs implemented Jul 12 00:05:33.364664 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:05:33.364671 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:05:33.364677 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:05:33.364684 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 12 00:05:33.364691 kernel: ITS: No ITS available, not enabling LPIs Jul 12 00:05:33.364698 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:05:33.364704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:05:33.364711 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:05:33.364718 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:05:33.364725 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:05:33.364733 kernel: Console: colour dummy device 80x25 Jul 12 00:05:33.364741 kernel: printk: console [tty1] enabled Jul 12 00:05:33.364748 kernel: ACPI: Core revision 20230628 Jul 12 00:05:33.364755 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:05:33.364762 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:05:33.364768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:05:33.364775 kernel: landlock: Up and running. Jul 12 00:05:33.364782 kernel: SELinux: Initializing. 
Jul 12 00:05:33.364789 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:05:33.364796 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:05:33.364805 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:05:33.364812 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:05:33.364819 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 12 00:05:33.364826 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0 Jul 12 00:05:33.364833 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 12 00:05:33.364840 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:05:33.364847 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:05:33.364860 kernel: Remapping and enabling EFI services. Jul 12 00:05:33.364867 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:05:33.364874 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:05:33.364881 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 12 00:05:33.364890 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:05:33.364897 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:05:33.364904 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:05:33.364912 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:05:33.364919 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:05:33.364927 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 12 00:05:33.364935 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:05:33.364942 kernel: CPU features: detected: CRC32 instructions Jul 12 00:05:33.364950 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:05:33.364957 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:05:33.364964 kernel: CPU features: detected: Privileged Access Never Jul 12 00:05:33.364971 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:05:33.364978 kernel: alternatives: applying system-wide alternatives Jul 12 00:05:33.364985 kernel: devtmpfs: initialized Jul 12 00:05:33.364994 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:05:33.365002 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:05:33.365009 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:05:33.365016 kernel: SMBIOS 3.1.0 present. 
Jul 12 00:05:33.365024 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jul 12 00:05:33.365031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:05:33.365039 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:05:33.365046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:05:33.365053 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:05:33.365062 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:05:33.365069 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jul 12 00:05:33.365076 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:05:33.365084 kernel: cpuidle: using governor menu Jul 12 00:05:33.365091 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:05:33.365098 kernel: ASID allocator initialised with 32768 entries Jul 12 00:05:33.365105 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:05:33.365112 kernel: Serial: AMBA PL011 UART driver Jul 12 00:05:33.365119 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:05:33.365128 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:05:33.365135 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:05:33.365142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:05:33.365149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:05:33.365157 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:05:33.365164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:05:33.365171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:05:33.365179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:05:33.365186 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:05:33.365194 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:05:33.365201 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:05:33.365209 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:05:33.365216 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:05:33.365223 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:05:33.365231 kernel: ACPI: Interpreter enabled Jul 12 00:05:33.365238 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:05:33.365254 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:05:33.365265 kernel: printk: console [ttyAMA0] enabled Jul 12 00:05:33.365276 kernel: printk: bootconsole [pl11] disabled Jul 12 00:05:33.365283 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 12 00:05:33.365290 kernel: iommu: Default domain type: Translated Jul 12 00:05:33.365297 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:05:33.365305 kernel: efivars: Registered efivars operations Jul 12 00:05:33.365312 kernel: vgaarb: loaded Jul 12 00:05:33.365319 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:05:33.365326 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:05:33.365333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:05:33.365342 kernel: pnp: PnP ACPI init Jul 12 00:05:33.365350 kernel: pnp: PnP ACPI: found 0 devices Jul 12 00:05:33.365357 kernel: NET: Registered PF_INET protocol family Jul 12 00:05:33.365364 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:05:33.365371 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:05:33.365379 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:05:33.365386 kernel: TCP established hash table entries: 32768 (order: 
6, 262144 bytes, linear) Jul 12 00:05:33.365393 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:05:33.365401 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:05:33.365409 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:05:33.365417 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:05:33.365424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:05:33.365431 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:05:33.365438 kernel: kvm [1]: HYP mode not available Jul 12 00:05:33.365445 kernel: Initialise system trusted keyrings Jul 12 00:05:33.365453 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:05:33.365460 kernel: Key type asymmetric registered Jul 12 00:05:33.365467 kernel: Asymmetric key parser 'x509' registered Jul 12 00:05:33.365475 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:05:33.365483 kernel: io scheduler mq-deadline registered Jul 12 00:05:33.365490 kernel: io scheduler kyber registered Jul 12 00:05:33.365497 kernel: io scheduler bfq registered Jul 12 00:05:33.365504 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:05:33.365512 kernel: thunder_xcv, ver 1.0 Jul 12 00:05:33.365519 kernel: thunder_bgx, ver 1.0 Jul 12 00:05:33.365526 kernel: nicpf, ver 1.0 Jul 12 00:05:33.365533 kernel: nicvf, ver 1.0 Jul 12 00:05:33.365673 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:05:33.365745 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:05:32 UTC (1752278732) Jul 12 00:05:33.365756 kernel: efifb: probing for efifb Jul 12 00:05:33.365763 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 12 00:05:33.365771 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 12 00:05:33.365778 kernel: efifb: scrolling: redraw Jul 12 00:05:33.365785 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jul 12 00:05:33.365792 kernel: Console: switching to colour frame buffer device 128x48 Jul 12 00:05:33.365801 kernel: fb0: EFI VGA frame buffer device Jul 12 00:05:33.365808 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 12 00:05:33.365816 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:05:33.365823 kernel: No ACPI PMU IRQ for CPU0 Jul 12 00:05:33.365830 kernel: No ACPI PMU IRQ for CPU1 Jul 12 00:05:33.365837 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 12 00:05:33.365844 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:05:33.365852 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:05:33.365859 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:05:33.365867 kernel: Segment Routing with IPv6 Jul 12 00:05:33.365874 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:05:33.365882 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:05:33.365889 kernel: Key type dns_resolver registered Jul 12 00:05:33.365896 kernel: registered taskstats version 1 Jul 12 00:05:33.365903 kernel: Loading compiled-in X.509 certificates Jul 12 00:05:33.365911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:05:33.365918 kernel: Key type .fscrypt registered Jul 12 00:05:33.365925 kernel: Key type fscrypt-provisioning registered Jul 12 00:05:33.365933 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:05:33.365941 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:05:33.365948 kernel: ima: No architecture policies found Jul 12 00:05:33.365956 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:05:33.365963 kernel: clk: Disabling unused clocks Jul 12 00:05:33.365970 kernel: Freeing unused kernel memory: 39424K Jul 12 00:05:33.365977 kernel: Run /init as init process Jul 12 00:05:33.365984 kernel: with arguments: Jul 12 00:05:33.365991 kernel: /init Jul 12 00:05:33.366000 kernel: with environment: Jul 12 00:05:33.366007 kernel: HOME=/ Jul 12 00:05:33.366014 kernel: TERM=linux Jul 12 00:05:33.366021 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:05:33.366031 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:05:33.366040 systemd[1]: Detected virtualization microsoft. Jul 12 00:05:33.366048 systemd[1]: Detected architecture arm64. Jul 12 00:05:33.366055 systemd[1]: Running in initrd. Jul 12 00:05:33.366065 systemd[1]: No hostname configured, using default hostname. Jul 12 00:05:33.366072 systemd[1]: Hostname set to . Jul 12 00:05:33.366080 systemd[1]: Initializing machine ID from random generator. Jul 12 00:05:33.366088 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:05:33.366096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:05:33.366103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:05:33.366112 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 12 00:05:33.366120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:05:33.366129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:05:33.366137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:05:33.366146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:05:33.366154 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:05:33.366162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:05:33.366170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:05:33.366180 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:05:33.366187 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:05:33.366195 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:05:33.366203 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:05:33.366211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:05:33.366218 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:05:33.366226 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:05:33.366234 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:05:33.366242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:05:33.372426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:05:33.372436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:05:33.372444 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 12 00:05:33.372452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:05:33.372460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:05:33.372468 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:05:33.372475 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:05:33.372483 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:05:33.372491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:05:33.372525 systemd-journald[217]: Collecting audit messages is disabled. Jul 12 00:05:33.372545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:33.372559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:05:33.372571 systemd-journald[217]: Journal started Jul 12 00:05:33.372590 systemd-journald[217]: Runtime Journal (/run/log/journal/b89b6c0009c84ffe976fa3961561d88e) is 8.0M, max 78.5M, 70.5M free. Jul 12 00:05:33.388290 kernel: Bridge firewalling registered Jul 12 00:05:33.361346 systemd-modules-load[218]: Inserted module 'overlay' Jul 12 00:05:33.387667 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 12 00:05:33.411576 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:05:33.412486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:05:33.427062 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:05:33.435651 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:05:33.447217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:05:33.457666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 00:05:33.477482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:05:33.492428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:05:33.511725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:05:33.525453 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:05:33.538311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:33.561173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:05:33.570580 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:05:33.583793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:05:33.612670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:05:33.626868 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:05:33.641365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:05:33.663900 dracut-cmdline[252]: dracut-dracut-053 Jul 12 00:05:33.680491 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:05:33.671115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 12 00:05:33.674224 systemd-resolved[257]: Positive Trust Anchors: Jul 12 00:05:33.674234 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:05:33.674284 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:05:33.676424 systemd-resolved[257]: Defaulting to hostname 'linux'. Jul 12 00:05:33.682114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:05:33.723176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:05:33.812265 kernel: SCSI subsystem initialized Jul 12 00:05:33.820260 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:05:33.831412 kernel: iscsi: registered transport (tcp) Jul 12 00:05:33.850416 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:05:33.850433 kernel: QLogic iSCSI HBA Driver Jul 12 00:05:33.889659 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:05:33.904451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:05:33.937974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 12 00:05:33.938040 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:05:33.944681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:05:33.993272 kernel: raid6: neonx8 gen() 15758 MB/s Jul 12 00:05:34.013258 kernel: raid6: neonx4 gen() 15670 MB/s Jul 12 00:05:34.033255 kernel: raid6: neonx2 gen() 13236 MB/s Jul 12 00:05:34.054256 kernel: raid6: neonx1 gen() 10480 MB/s Jul 12 00:05:34.074259 kernel: raid6: int64x8 gen() 6960 MB/s Jul 12 00:05:34.094254 kernel: raid6: int64x4 gen() 7353 MB/s Jul 12 00:05:34.115256 kernel: raid6: int64x2 gen() 6133 MB/s Jul 12 00:05:34.139446 kernel: raid6: int64x1 gen() 5061 MB/s Jul 12 00:05:34.139457 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s Jul 12 00:05:34.164186 kernel: raid6: .... xor() 11957 MB/s, rmw enabled Jul 12 00:05:34.164207 kernel: raid6: using neon recovery algorithm Jul 12 00:05:34.176937 kernel: xor: measuring software checksum speed Jul 12 00:05:34.176955 kernel: 8regs : 19693 MB/sec Jul 12 00:05:34.184203 kernel: 32regs : 18628 MB/sec Jul 12 00:05:34.184214 kernel: arm64_neon : 27114 MB/sec Jul 12 00:05:34.188643 kernel: xor: using function: arm64_neon (27114 MB/sec) Jul 12 00:05:34.239262 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:05:34.250290 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:05:34.266394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:05:34.290583 systemd-udevd[440]: Using default interface naming scheme 'v255'. Jul 12 00:05:34.296996 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:05:34.318447 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:05:34.331268 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Jul 12 00:05:34.356441 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:05:34.375518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:05:34.410237 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:05:34.436865 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:05:34.466681 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:05:34.480790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:05:34.496126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:05:34.510391 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:05:34.528265 kernel: hv_vmbus: Vmbus version:5.3 Jul 12 00:05:34.532448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:05:34.725723 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 12 00:05:34.725746 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 12 00:05:34.725756 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 12 00:05:34.725774 kernel: PTP clock support registered Jul 12 00:05:34.725784 kernel: hv_vmbus: registering driver hid_hyperv Jul 12 00:05:34.725793 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 12 00:05:34.725803 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 12 00:05:34.725812 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 12 00:05:34.725960 kernel: hv_utils: Registering HyperV Utility Driver Jul 12 00:05:34.725970 kernel: hv_vmbus: registering driver hv_utils Jul 12 00:05:34.725979 kernel: hv_utils: Heartbeat IC version 3.0 Jul 12 00:05:34.725991 kernel: hv_utils: Shutdown IC version 3.2 Jul 12 00:05:34.726000 kernel: hv_utils: TimeSync IC version 4.0 Jul 12 00:05:34.726009 kernel: hv_vmbus: registering driver hv_storvsc Jul 12 00:05:34.726018 kernel: hv_vmbus: registering driver hv_netvsc Jul 12 00:05:34.726027 kernel: scsi host1: storvsc_host_t Jul 12 00:05:34.726142 kernel: scsi host0: storvsc_host_t Jul 12 00:05:34.726223 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 12 00:05:34.726247 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 12 00:05:34.624145 systemd-resolved[257]: Clock change detected. Flushing caches. Jul 12 00:05:34.721127 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:05:34.768929 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: VF slot 1 added Jul 12 00:05:34.770240 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 12 00:05:34.770354 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:05:34.737640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 12 00:05:34.786303 kernel: hv_vmbus: registering driver hv_pci Jul 12 00:05:34.737770 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:05:34.808199 kernel: hv_pci 14e00f9b-9828-48da-9a76-49d110ab5156: PCI VMBus probing: Using version 0x10004 Jul 12 00:05:34.808359 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 12 00:05:34.768670 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:05:34.838738 kernel: hv_pci 14e00f9b-9828-48da-9a76-49d110ab5156: PCI host bridge to bus 9828:00 Jul 12 00:05:34.838901 kernel: pci_bus 9828:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 12 00:05:34.839000 kernel: pci_bus 9828:00: No busn resource found for root bus, will use [bus 00-ff] Jul 12 00:05:34.776995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:05:34.777179 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:34.863934 kernel: pci 9828:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 12 00:05:34.863974 kernel: pci 9828:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:05:34.795186 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 12 00:05:34.930183 kernel: pci 9828:00:02.0: enabling Extended Tags Jul 12 00:05:34.930358 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 12 00:05:34.930466 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 12 00:05:34.930549 kernel: pci 9828:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9828:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 12 00:05:34.930632 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 12 00:05:34.930735 kernel: pci_bus 9828:00: busn_res: [bus 00-ff] end is updated to 00 Jul 12 00:05:34.930820 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 12 00:05:34.930902 kernel: pci 9828:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 12 00:05:34.930985 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 12 00:05:34.852793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:05:34.950476 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:34.893916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:05:34.960539 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 12 00:05:34.961253 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:05:34.995419 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 12 00:05:35.025081 kernel: mlx5_core 9828:00:02.0: enabling device (0000 -> 0002) Jul 12 00:05:35.032107 kernel: mlx5_core 9828:00:02.0: firmware version: 16.31.2424 Jul 12 00:05:35.311619 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: VF registering: eth1 Jul 12 00:05:35.311825 kernel: mlx5_core 9828:00:02.0 eth1: joined to eth0 Jul 12 00:05:35.321182 kernel: mlx5_core 9828:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 12 00:05:35.336133 kernel: mlx5_core 9828:00:02.0 enP38952s1: renamed from eth1 Jul 12 00:05:35.507114 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (502) Jul 12 00:05:35.515267 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 12 00:05:35.528193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 12 00:05:35.571205 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (490) Jul 12 00:05:35.577829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 12 00:05:35.596472 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 12 00:05:35.603487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 12 00:05:35.628319 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:05:35.652109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:35.659105 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:36.668977 disk-uuid[601]: The operation has completed successfully. Jul 12 00:05:36.674263 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:05:36.724079 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:05:36.724195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 12 00:05:36.752231 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:05:36.766055 sh[714]: Success Jul 12 00:05:36.795344 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:05:36.978570 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:05:36.984578 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:05:36.999249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:05:37.030941 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:05:37.030994 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:37.038096 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:05:37.043540 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:05:37.048893 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:05:37.295737 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:05:37.301459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:05:37.318418 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:05:37.324245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:05:37.363008 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:37.363033 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:37.367628 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:37.410119 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:37.425436 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 12 00:05:37.431123 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:37.439277 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:05:37.445390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:05:37.466556 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:05:37.479309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:05:37.510690 systemd-networkd[898]: lo: Link UP Jul 12 00:05:37.510704 systemd-networkd[898]: lo: Gained carrier Jul 12 00:05:37.512318 systemd-networkd[898]: Enumeration completed Jul 12 00:05:37.512435 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:05:37.521720 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:05:37.521724 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:05:37.522727 systemd[1]: Reached target network.target - Network. Jul 12 00:05:37.615109 kernel: mlx5_core 9828:00:02.0 enP38952s1: Link up Jul 12 00:05:37.694112 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: Data path switched to VF: enP38952s1 Jul 12 00:05:37.694743 systemd-networkd[898]: enP38952s1: Link UP Jul 12 00:05:37.694979 systemd-networkd[898]: eth0: Link UP Jul 12 00:05:37.695413 systemd-networkd[898]: eth0: Gained carrier Jul 12 00:05:37.695422 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:05:37.703597 systemd-networkd[898]: enP38952s1: Gained carrier Jul 12 00:05:37.730147 systemd-networkd[898]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:05:38.216170 ignition[897]: Ignition 2.19.0 Jul 12 00:05:38.216181 ignition[897]: Stage: fetch-offline Jul 12 00:05:38.220851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:05:38.216216 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.216224 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.216311 ignition[897]: parsed url from cmdline: "" Jul 12 00:05:38.216314 ignition[897]: no config URL provided Jul 12 00:05:38.216318 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:05:38.248420 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 12 00:05:38.216324 ignition[897]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:05:38.216329 ignition[897]: failed to fetch config: resource requires networking Jul 12 00:05:38.216567 ignition[897]: Ignition finished successfully Jul 12 00:05:38.272492 ignition[907]: Ignition 2.19.0 Jul 12 00:05:38.272499 ignition[907]: Stage: fetch Jul 12 00:05:38.272792 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.272804 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.272928 ignition[907]: parsed url from cmdline: "" Jul 12 00:05:38.272932 ignition[907]: no config URL provided Jul 12 00:05:38.272937 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:05:38.272945 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:05:38.272977 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 12 00:05:38.423639 ignition[907]: GET result: OK Jul 12 00:05:38.423748 ignition[907]: config has been read from IMDS 
userdata Jul 12 00:05:38.423831 ignition[907]: parsing config with SHA512: 1dd38187578391c8e9e438aa2855f1af9e725013ec2bc256105808904f8621ae2416a143bfc2080881bae91fafbaf2ffe290a639ff30dcc1c0ac588c02d20d86 Jul 12 00:05:38.428027 unknown[907]: fetched base config from "system" Jul 12 00:05:38.428500 ignition[907]: fetch: fetch complete Jul 12 00:05:38.428034 unknown[907]: fetched base config from "system" Jul 12 00:05:38.428505 ignition[907]: fetch: fetch passed Jul 12 00:05:38.428039 unknown[907]: fetched user config from "azure" Jul 12 00:05:38.428559 ignition[907]: Ignition finished successfully Jul 12 00:05:38.433906 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 12 00:05:38.461234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 00:05:38.477571 ignition[913]: Ignition 2.19.0 Jul 12 00:05:38.477580 ignition[913]: Stage: kargs Jul 12 00:05:38.484638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:05:38.477798 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.477807 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.479319 ignition[913]: kargs: kargs passed Jul 12 00:05:38.479367 ignition[913]: Ignition finished successfully Jul 12 00:05:38.511317 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:05:38.531646 ignition[919]: Ignition 2.19.0 Jul 12 00:05:38.531658 ignition[919]: Stage: disks Jul 12 00:05:38.534945 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:05:38.531832 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:38.541533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:05:38.531841 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:38.548247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jul 12 00:05:38.532757 ignition[919]: disks: disks passed Jul 12 00:05:38.559936 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:05:38.532800 ignition[919]: Ignition finished successfully Jul 12 00:05:38.570997 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:05:38.581590 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:05:38.605339 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:05:38.688011 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 12 00:05:38.696996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:05:38.716291 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:05:38.778287 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none. Jul 12 00:05:38.773947 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:05:38.779610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:05:38.821163 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:05:38.831015 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:05:38.838323 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 12 00:05:38.859636 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 12 00:05:38.901794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Jul 12 00:05:38.901818 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:38.901829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:38.901839 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:38.859684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:05:38.880418 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:05:38.924503 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 00:05:38.937573 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:38.931793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:05:39.014314 systemd-networkd[898]: enP38952s1: Gained IPv6LL Jul 12 00:05:39.142279 systemd-networkd[898]: eth0: Gained IPv6LL Jul 12 00:05:39.275644 coreos-metadata[941]: Jul 12 00:05:39.275 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:05:39.286974 coreos-metadata[941]: Jul 12 00:05:39.286 INFO Fetch successful Jul 12 00:05:39.292272 coreos-metadata[941]: Jul 12 00:05:39.292 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:05:39.304499 coreos-metadata[941]: Jul 12 00:05:39.304 INFO Fetch successful Jul 12 00:05:39.318059 coreos-metadata[941]: Jul 12 00:05:39.318 INFO wrote hostname ci-4081.3.4-n-0fb9ec6aad to /sysroot/etc/hostname Jul 12 00:05:39.327058 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jul 12 00:05:39.574099 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:05:39.631958 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:05:39.641279 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:05:39.650054 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:05:40.513900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:05:40.527568 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:05:40.536244 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:05:40.553460 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:05:40.564100 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:40.583137 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:05:40.596816 ignition[1058]: INFO : Ignition 2.19.0 Jul 12 00:05:40.596816 ignition[1058]: INFO : Stage: mount Jul 12 00:05:40.611262 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:40.611262 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:40.611262 ignition[1058]: INFO : mount: mount passed Jul 12 00:05:40.611262 ignition[1058]: INFO : Ignition finished successfully Jul 12 00:05:40.602001 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:05:40.623231 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:05:40.639298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 12 00:05:40.679874 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069) Jul 12 00:05:40.679922 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:05:40.690279 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:05:40.690308 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:05:40.696106 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:05:40.697955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:05:40.728385 ignition[1087]: INFO : Ignition 2.19.0 Jul 12 00:05:40.732533 ignition[1087]: INFO : Stage: files Jul 12 00:05:40.732533 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:05:40.732533 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 12 00:05:40.732533 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:05:40.754610 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:05:40.754610 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:05:40.814378 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:05:40.821778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:05:40.814859 unknown[1087]: wrote ssh authorized keys file for user: core Jul 12 00:05:40.983559 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 00:05:41.272377 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:05:41.272377 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:05:41.292959 ignition[1087]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:41.292959 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:05:41.787492 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 12 00:05:42.671737 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:05:42.671737 ignition[1087]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 12 00:05:42.705333 ignition[1087]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : 
files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:05:42.720175 ignition[1087]: INFO : files: files passed Jul 12 00:05:42.720175 ignition[1087]: INFO : Ignition finished successfully Jul 12 00:05:42.724960 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:05:42.769367 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:05:42.788262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:05:42.887525 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:05:42.887525 initrd-setup-root-after-ignition[1114]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:05:42.816267 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 12 00:05:42.919195 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:05:42.816359 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:05:42.849621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:05:42.856855 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:05:42.888347 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:05:42.929820 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:05:42.929927 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:05:42.942517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:05:42.955134 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:05:42.966529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:05:42.969285 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:05:43.012668 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:05:43.036397 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:05:43.059016 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:05:43.066655 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:43.081711 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:05:43.093297 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:05:43.093372 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:05:43.110456 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:05:43.116640 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:05:43.128315 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:05:43.140048 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:05:43.151571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:05:43.163856 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:05:43.176982 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:05:43.190906 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:05:43.202179 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:05:43.214648 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:05:43.225364 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:05:43.225440 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:05:43.240637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:43.246866 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:43.258646 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:05:43.263976 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:43.270933 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:05:43.271001 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:05:43.288552 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:05:43.288604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:05:43.295777 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:05:43.295823 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:05:43.307131 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:05:43.307182 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:05:43.387380 ignition[1140]: INFO : Ignition 2.19.0
Jul 12 00:05:43.387380 ignition[1140]: INFO : Stage: umount
Jul 12 00:05:43.387380 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:05:43.387380 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 12 00:05:43.387380 ignition[1140]: INFO : umount: umount passed
Jul 12 00:05:43.387380 ignition[1140]: INFO : Ignition finished successfully
Jul 12 00:05:43.342249 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:05:43.379185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:05:43.391846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:05:43.391917 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:43.409232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:05:43.409299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:05:43.423293 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:05:43.423814 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:05:43.423911 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:05:43.431617 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:05:43.431713 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:05:43.447206 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:05:43.447306 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:05:43.458848 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:05:43.458905 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:05:43.471124 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:05:43.471170 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:05:43.483314 systemd[1]: Stopped target network.target - Network.
Jul 12 00:05:43.494982 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:05:43.495039 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:05:43.508427 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:05:43.519601 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:05:43.526871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:43.534622 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:05:43.544786 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:05:43.557112 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:05:43.557166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:05:43.567529 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:05:43.567571 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:05:43.573043 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:05:43.573121 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:05:43.583653 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:05:43.583700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:05:43.594530 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:05:43.605432 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:05:43.616125 systemd-networkd[898]: eth0: DHCPv6 lease lost
Jul 12 00:05:43.835638 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: Data path switched from VF: enP38952s1
Jul 12 00:05:43.622701 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:05:43.625125 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:05:43.634788 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:05:43.634827 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:43.658217 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:05:43.669467 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:05:43.669533 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:05:43.680985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:43.697023 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:05:43.697207 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:05:43.733996 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:05:43.734112 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:43.744150 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:05:43.744207 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:43.755367 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:05:43.755419 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:05:43.770882 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:05:43.771039 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:43.783248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:05:43.783318 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:43.794460 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:05:43.794492 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:43.805537 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:05:43.805584 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:05:43.831444 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:05:43.831500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:05:43.846330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:05:43.846399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:05:43.896349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:05:43.910294 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:05:43.910370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:43.923599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 00:05:43.923648 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:05:43.935586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:05:43.935631 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:43.948032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:43.948075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:43.959869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:05:43.959968 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:05:43.974875 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:05:43.974996 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:05:43.985925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:05:43.985999 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:05:43.998513 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:05:44.190619 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:05:44.008570 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:05:44.008647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:05:44.038321 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:05:44.129692 systemd[1]: Switching root.
Jul 12 00:05:44.210782 systemd-journald[217]: Journal stopped
Jul 12 00:05:48.214286 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:05:48.214320 kernel: SELinux: policy capability open_perms=1
Jul 12 00:05:48.214331 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:05:48.214340 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:05:48.214352 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:05:48.214360 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:05:48.214369 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:05:48.214378 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:05:48.214386 kernel: audit: type=1403 audit(1752278745.435:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:05:48.214396 systemd[1]: Successfully loaded SELinux policy in 131.552ms.
Jul 12 00:05:48.214408 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.478ms.
Jul 12 00:05:48.214418 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:05:48.214428 systemd[1]: Detected virtualization microsoft.
Jul 12 00:05:48.214437 systemd[1]: Detected architecture arm64.
Jul 12 00:05:48.214446 systemd[1]: Detected first boot.
Jul 12 00:05:48.214458 systemd[1]: Hostname set to .
Jul 12 00:05:48.214467 systemd[1]: Initializing machine ID from random generator.
Jul 12 00:05:48.214476 zram_generator::config[1198]: No configuration found.
Jul 12 00:05:48.214486 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:05:48.214496 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:05:48.214505 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jul 12 00:05:48.214515 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:05:48.214527 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:05:48.214538 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:05:48.214554 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:05:48.214565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:05:48.214575 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:05:48.214585 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:05:48.214594 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:05:48.214606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:05:48.214615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:05:48.214625 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:05:48.214635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:05:48.214644 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:05:48.214654 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:05:48.214663 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:05:48.214673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:05:48.214684 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:05:48.214693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:05:48.214703 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:05:48.214715 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:05:48.214725 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:05:48.214735 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:05:48.214745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:05:48.214756 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:05:48.214767 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:05:48.214777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:05:48.214787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:05:48.214797 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:05:48.214807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:05:48.214818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:05:48.214828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:05:48.214838 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:05:48.214848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:05:48.214858 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:05:48.214868 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:05:48.214878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:05:48.214888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:05:48.214899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:05:48.214910 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:05:48.214920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:05:48.214930 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:05:48.214940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:05:48.214950 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:05:48.214960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:05:48.214972 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:05:48.214983 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 12 00:05:48.214994 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 12 00:05:48.215003 kernel: fuse: init (API version 7.39)
Jul 12 00:05:48.215013 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:05:48.215022 kernel: loop: module loaded
Jul 12 00:05:48.215031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:05:48.215064 systemd-journald[1316]: Collecting audit messages is disabled.
Jul 12 00:05:48.215115 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:05:48.215127 systemd-journald[1316]: Journal started
Jul 12 00:05:48.215148 systemd-journald[1316]: Runtime Journal (/run/log/journal/c17f628b68104f9292a10f68a6904bbb) is 8.0M, max 78.5M, 70.5M free.
Jul 12 00:05:48.235104 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:05:48.235174 kernel: ACPI: bus type drm_connector registered
Jul 12 00:05:48.260541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:05:48.273247 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:05:48.274534 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:05:48.280806 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:05:48.287341 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:05:48.292738 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:05:48.298926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:05:48.307150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:05:48.312751 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:05:48.319535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:05:48.326543 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:05:48.326694 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:05:48.333607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:05:48.333749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:05:48.340359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:05:48.340498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:05:48.346573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:05:48.346709 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:05:48.353576 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:05:48.353713 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:05:48.360249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:05:48.362262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:05:48.368805 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:05:48.375159 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:05:48.382803 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:05:48.390205 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:05:48.406142 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:05:48.419198 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:05:48.426310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:05:48.432476 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:05:48.436277 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:05:48.443344 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:05:48.449648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:05:48.450814 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:05:48.456828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:05:48.458077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:05:48.465243 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:05:48.475730 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:05:48.483947 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:05:48.491007 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:05:48.501375 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:05:48.507482 systemd-journald[1316]: Time spent on flushing to /var/log/journal/c17f628b68104f9292a10f68a6904bbb is 14.085ms for 885 entries.
Jul 12 00:05:48.507482 systemd-journald[1316]: System Journal (/var/log/journal/c17f628b68104f9292a10f68a6904bbb) is 8.0M, max 2.6G, 2.6G free.
Jul 12 00:05:48.545545 systemd-journald[1316]: Received client request to flush runtime journal.
Jul 12 00:05:48.515901 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:05:48.523315 udevadm[1358]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 12 00:05:48.547367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:05:48.576463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:05:48.583842 systemd-tmpfiles[1356]: ACLs are not supported, ignoring.
Jul 12 00:05:48.583856 systemd-tmpfiles[1356]: ACLs are not supported, ignoring.
Jul 12 00:05:48.590483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:05:48.600329 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:05:48.763180 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:05:48.777393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:05:48.793288 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jul 12 00:05:48.793303 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Jul 12 00:05:48.797604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:05:49.643104 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:05:49.657233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:05:49.677383 systemd-udevd[1382]: Using default interface naming scheme 'v255'.
Jul 12 00:05:49.869391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:05:49.886539 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:05:49.922973 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jul 12 00:05:49.953011 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:05:50.008129 kernel: mousedev: PS/2 mouse device common for all mice
Jul 12 00:05:50.014499 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:05:50.061114 kernel: hv_vmbus: registering driver hv_balloon
Jul 12 00:05:50.077318 kernel: hv_vmbus: registering driver hyperv_fb
Jul 12 00:05:50.077374 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 12 00:05:50.077387 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 12 00:05:50.077399 kernel: hv_balloon: Memory hot add disabled on ARM64
Jul 12 00:05:50.077410 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 12 00:05:50.094112 kernel: Console: switching to colour dummy device 80x25
Jul 12 00:05:50.101314 kernel: Console: switching to colour frame buffer device 128x48
Jul 12 00:05:50.120427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:50.144775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:05:50.145024 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:50.161131 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1386)
Jul 12 00:05:50.171260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:05:50.181117 systemd-networkd[1392]: lo: Link UP
Jul 12 00:05:50.182177 systemd-networkd[1392]: lo: Gained carrier
Jul 12 00:05:50.183923 systemd-networkd[1392]: Enumeration completed
Jul 12 00:05:50.185327 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:05:50.185330 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:05:50.194611 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:05:50.219244 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:05:50.250403 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 12 00:05:50.266108 kernel: mlx5_core 9828:00:02.0 enP38952s1: Link up
Jul 12 00:05:50.311225 kernel: hv_netvsc 000d3aff-8062-000d-3aff-8062000d3aff eth0: Data path switched to VF: enP38952s1
Jul 12 00:05:50.311865 systemd-networkd[1392]: enP38952s1: Link UP
Jul 12 00:05:50.311956 systemd-networkd[1392]: eth0: Link UP
Jul 12 00:05:50.311959 systemd-networkd[1392]: eth0: Gained carrier
Jul 12 00:05:50.311972 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:05:50.315370 systemd-networkd[1392]: enP38952s1: Gained carrier
Jul 12 00:05:50.320150 systemd-networkd[1392]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 12 00:05:50.376566 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 12 00:05:50.389250 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 12 00:05:50.462153 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:05:50.504477 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:05:50.512315 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:05:50.524251 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:05:50.527964 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:05:50.556873 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:05:50.563764 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:05:50.570528 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:05:50.570555 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:05:50.577590 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:05:50.584656 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 12 00:05:50.596209 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:05:50.603579 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:05:50.609931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:05:50.613289 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:05:50.621027 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:05:50.630244 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:05:50.641822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:05:50.659596 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:05:50.677459 kernel: loop0: detected capacity change from 0 to 31320
Jul 12 00:05:50.702946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:05:50.703656 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:05:50.752508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:05:50.995122 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:05:51.067226 kernel: loop1: detected capacity change from 0 to 114432 Jul 12 00:05:51.494221 systemd-networkd[1392]: eth0: Gained IPv6LL Jul 12 00:05:51.497592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:05:51.578108 kernel: loop2: detected capacity change from 0 to 203944 Jul 12 00:05:51.823121 kernel: loop3: detected capacity change from 0 to 114328 Jul 12 00:05:52.326265 systemd-networkd[1392]: enP38952s1: Gained IPv6LL Jul 12 00:05:52.689116 kernel: loop4: detected capacity change from 0 to 31320 Jul 12 00:05:52.698132 kernel: loop5: detected capacity change from 0 to 114432 Jul 12 00:05:52.708269 kernel: loop6: detected capacity change from 0 to 203944 Jul 12 00:05:52.717116 kernel: loop7: detected capacity change from 0 to 114328 Jul 12 00:05:52.719647 (sd-merge)[1502]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 12 00:05:52.720047 (sd-merge)[1502]: Merged extensions into '/usr'. Jul 12 00:05:52.723852 systemd[1]: Reloading requested from client PID 1483 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:05:52.723871 systemd[1]: Reloading... Jul 12 00:05:52.783116 zram_generator::config[1538]: No configuration found. Jul 12 00:05:53.006069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:05:53.078574 systemd[1]: Reloading finished in 354 ms. Jul 12 00:05:53.099931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:05:53.113233 systemd[1]: Starting ensure-sysext.service... Jul 12 00:05:53.119237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jul 12 00:05:53.127658 systemd[1]: Reloading requested from client PID 1590 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:05:53.127676 systemd[1]: Reloading... Jul 12 00:05:53.138017 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:05:53.138300 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:05:53.138924 systemd-tmpfiles[1591]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:05:53.139160 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 12 00:05:53.139206 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 12 00:05:53.142199 systemd-tmpfiles[1591]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:05:53.142210 systemd-tmpfiles[1591]: Skipping /boot Jul 12 00:05:53.149513 systemd-tmpfiles[1591]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:05:53.149528 systemd-tmpfiles[1591]: Skipping /boot Jul 12 00:05:53.203469 zram_generator::config[1620]: No configuration found. Jul 12 00:05:53.311938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:05:53.386544 systemd[1]: Reloading finished in 258 ms. Jul 12 00:05:53.404054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:05:53.421336 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:05:53.429298 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:05:53.438844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 12 00:05:53.452301 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:05:53.459217 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:05:53.478781 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:05:53.486467 systemd[1]: Finished ensure-sysext.service. Jul 12 00:05:53.493961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:05:53.502219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:05:53.519289 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:05:53.528352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:05:53.550267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:05:53.556079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:05:53.556151 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:05:53.562359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:05:53.562519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:05:53.568959 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:05:53.569127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:05:53.575406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:05:53.575556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:05:53.582751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:05:53.582959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 12 00:05:53.591628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:05:53.591725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:05:53.686934 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:05:53.689723 systemd-resolved[1689]: Positive Trust Anchors: Jul 12 00:05:53.689737 systemd-resolved[1689]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:05:53.689771 systemd-resolved[1689]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:05:53.963342 systemd-resolved[1689]: Using system hostname 'ci-4081.3.4-n-0fb9ec6aad'. Jul 12 00:05:53.965101 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:05:53.972871 systemd[1]: Reached target network.target - Network. Jul 12 00:05:53.978221 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:05:53.984753 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:05:54.089910 augenrules[1724]: No rules Jul 12 00:05:54.091637 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:05:57.326013 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 12 00:05:57.333947 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:06:05.477209 ldconfig[1479]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:06:05.489582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:06:05.501297 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:06:05.773472 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:06:05.780765 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:06:05.786974 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:06:05.794057 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:06:05.802310 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:06:05.809000 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:06:05.816191 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:06:05.823378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:06:05.823417 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:06:05.828606 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:06:05.834662 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:06:05.842619 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:06:06.064544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jul 12 00:06:06.070987 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:06:06.077200 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:06:06.082712 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:06:06.088305 systemd[1]: System is tainted: cgroupsv1 Jul 12 00:06:06.088359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:06:06.088382 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:06:06.101216 systemd[1]: Starting chronyd.service - NTP client/server... Jul 12 00:06:06.109211 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:06:06.128238 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:06:06.136083 (chronyd)[1740]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 12 00:06:06.138427 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:06:06.144859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:06:06.154302 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:06:06.154958 jq[1747]: false Jul 12 00:06:06.160253 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:06:06.160306 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jul 12 00:06:06.162251 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 12 00:06:06.168726 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jul 12 00:06:06.170527 KVP[1750]: KVP starting; pid is:1750 Jul 12 00:06:06.170917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:06.183326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:06:06.190895 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:06:06.200231 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:06:06.200768 KVP[1750]: KVP LIC Version: 3.1 Jul 12 00:06:06.206448 kernel: hv_utils: KVP IC version 4.0 Jul 12 00:06:06.218406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:06:06.228288 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:06:06.236489 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:06:06.243459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:06:06.246269 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:06:06.256250 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:06:06.268657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:06:06.268887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:06:06.271128 jq[1766]: true Jul 12 00:06:06.281678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:06:06.285550 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 12 00:06:06.315454 jq[1771]: true Jul 12 00:06:06.420507 chronyd[1797]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 12 00:06:06.489894 systemd-logind[1763]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 12 00:06:06.491242 systemd-logind[1763]: New seat seat0. Jul 12 00:06:06.491944 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:06:06.516805 chronyd[1797]: Timezone right/UTC failed leap second check, ignoring Jul 12 00:06:06.517420 chronyd[1797]: Loaded seccomp filter (level 2) Jul 12 00:06:06.519632 systemd[1]: Started chronyd.service - NTP client/server. Jul 12 00:06:06.540082 tar[1770]: linux-arm64/helm Jul 12 00:06:06.542254 extend-filesystems[1748]: Found loop4 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found loop5 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found loop6 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found loop7 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda1 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda2 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda3 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found usr Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda4 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda6 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda7 Jul 12 00:06:06.542254 extend-filesystems[1748]: Found sda9 Jul 12 00:06:06.542254 extend-filesystems[1748]: Checking size of /dev/sda9 Jul 12 00:06:06.543300 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:06:06.873543 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 12 00:06:06.873323 dbus-daemon[1744]: [system] SELinux support is enabled Jul 12 00:06:06.887612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:06:06.887657 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:06:06.900694 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:06:06.900717 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:06:06.916208 extend-filesystems[1748]: Old size kept for /dev/sda9 Jul 12 00:06:06.916208 extend-filesystems[1748]: Found sr0 Jul 12 00:06:06.916060 dbus-daemon[1744]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 12 00:06:06.946514 bash[1794]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:06:06.921631 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:06:06.946673 update_engine[1765]: I20250712 00:06:06.930481 1765 main.cc:92] Flatcar Update Engine starting Jul 12 00:06:06.946673 update_engine[1765]: I20250712 00:06:06.937373 1765 update_check_scheduler.cc:74] Next update check in 5m41s Jul 12 00:06:06.956544 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:06:06.956799 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:06:06.965566 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:06:06.972510 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:06:06.973320 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 12 00:06:06.980411 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:06:06.991715 tar[1770]: linux-arm64/LICENSE Jul 12 00:06:06.991903 tar[1770]: linux-arm64/README.md Jul 12 00:06:07.017115 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1800) Jul 12 00:06:07.018931 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:06:07.203395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:07.203849 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:07.230962 coreos-metadata[1743]: Jul 12 00:06:07.230 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 12 00:06:07.233460 coreos-metadata[1743]: Jul 12 00:06:07.233 INFO Fetch successful Jul 12 00:06:07.235615 coreos-metadata[1743]: Jul 12 00:06:07.235 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 12 00:06:07.237796 coreos-metadata[1743]: Jul 12 00:06:07.237 INFO Fetch successful Jul 12 00:06:07.239068 coreos-metadata[1743]: Jul 12 00:06:07.239 INFO Fetching http://168.63.129.16/machine/9ed74b05-ca68-46cd-bfa7-e13e61249182/547e0eaa%2D704f%2D4483%2Dad4b%2Ddd8a4a1adae9.%5Fci%2D4081.3.4%2Dn%2D0fb9ec6aad?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 12 00:06:07.240982 coreos-metadata[1743]: Jul 12 00:06:07.240 INFO Fetch successful Jul 12 00:06:07.241239 coreos-metadata[1743]: Jul 12 00:06:07.241 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 12 00:06:07.255550 coreos-metadata[1743]: Jul 12 00:06:07.254 INFO Fetch successful Jul 12 00:06:07.359730 sshd_keygen[1808]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:06:07.382927 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 12 00:06:07.395769 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:06:07.406070 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 12 00:06:07.413123 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:06:07.414834 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:06:07.624001 kubelet[1867]: E0712 00:06:07.623901 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:07.627340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:07.627759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:07.676533 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:06:07.694686 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:06:07.695327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:06:07.698606 (ntainerd)[1906]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:06:07.707143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:06:07.716363 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:06:07.724355 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 12 00:06:07.731954 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:06:07.743435 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:06:07.759444 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jul 12 00:06:07.767443 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:06:07.777243 locksmithd[1828]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:06:09.551667 containerd[1906]: time="2025-07-12T00:06:09.551585160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:06:09.576807 containerd[1906]: time="2025-07-12T00:06:09.576682640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.578057 containerd[1906]: time="2025-07-12T00:06:09.578022720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:09.578102 containerd[1906]: time="2025-07-12T00:06:09.578058760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:06:09.578102 containerd[1906]: time="2025-07-12T00:06:09.578075520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:06:09.578266 containerd[1906]: time="2025-07-12T00:06:09.578241320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:06:09.578292 containerd[1906]: time="2025-07-12T00:06:09.578267080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.578350 containerd[1906]: time="2025-07-12T00:06:09.578328160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:09.578376 containerd[1906]: time="2025-07-12T00:06:09.578350920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579121 containerd[1906]: time="2025-07-12T00:06:09.578735360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579121 containerd[1906]: time="2025-07-12T00:06:09.578762440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579121 containerd[1906]: time="2025-07-12T00:06:09.578783560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579121 containerd[1906]: time="2025-07-12T00:06:09.578797920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579121 containerd[1906]: time="2025-07-12T00:06:09.578885000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579261 containerd[1906]: time="2025-07-12T00:06:09.579154640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579337 containerd[1906]: time="2025-07-12T00:06:09.579309320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:06:09.579337 containerd[1906]: time="2025-07-12T00:06:09.579331440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:06:09.579427 containerd[1906]: time="2025-07-12T00:06:09.579407880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:06:09.579476 containerd[1906]: time="2025-07-12T00:06:09.579460200Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:06:09.604810 containerd[1906]: time="2025-07-12T00:06:09.604744000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:06:09.604810 containerd[1906]: time="2025-07-12T00:06:09.604842240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:06:09.605057 containerd[1906]: time="2025-07-12T00:06:09.604869880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:06:09.605057 containerd[1906]: time="2025-07-12T00:06:09.604895360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:06:09.605057 containerd[1906]: time="2025-07-12T00:06:09.604922400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:06:09.605157 containerd[1906]: time="2025-07-12T00:06:09.605120120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:06:09.606024 containerd[1906]: time="2025-07-12T00:06:09.605987800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 12 00:06:09.606195 containerd[1906]: time="2025-07-12T00:06:09.606167120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:06:09.606195 containerd[1906]: time="2025-07-12T00:06:09.606190440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:06:09.606252 containerd[1906]: time="2025-07-12T00:06:09.606204400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:06:09.606252 containerd[1906]: time="2025-07-12T00:06:09.606218200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606252 containerd[1906]: time="2025-07-12T00:06:09.606232000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606252 containerd[1906]: time="2025-07-12T00:06:09.606250600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606265880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606281040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606293720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606310600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606323520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606347440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606362000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606374120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606387400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606399560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606412520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606424280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606438320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.606540 containerd[1906]: time="2025-07-12T00:06:09.606451480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606465800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606477320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606489840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606504440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606519800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606540600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606553240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606568960Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606622120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606641200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606651880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606664160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:06:09.607021 containerd[1906]: time="2025-07-12T00:06:09.606674080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607290 containerd[1906]: time="2025-07-12T00:06:09.606686320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:06:09.607290 containerd[1906]: time="2025-07-12T00:06:09.606697640Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:06:09.607290 containerd[1906]: time="2025-07-12T00:06:09.606707440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:06:09.607346 containerd[1906]: time="2025-07-12T00:06:09.606983520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:06:09.607346 containerd[1906]: time="2025-07-12T00:06:09.607046080Z" level=info msg="Connect containerd service" Jul 12 00:06:09.607346 containerd[1906]: time="2025-07-12T00:06:09.607073840Z" level=info msg="using legacy CRI server" Jul 12 00:06:09.607346 containerd[1906]: time="2025-07-12T00:06:09.607081320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:06:09.607346 containerd[1906]: 
time="2025-07-12T00:06:09.607177840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.607736640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608009480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608020200Z" level=info msg="Start subscribing containerd event" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608074320Z" level=info msg="Start recovering state" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608049640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608156440Z" level=info msg="Start event monitor" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608169160Z" level=info msg="Start snapshots syncer" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608179240Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608186520Z" level=info msg="Start streaming server" Jul 12 00:06:09.608867 containerd[1906]: time="2025-07-12T00:06:09.608250280Z" level=info msg="containerd successfully booted in 0.057537s" Jul 12 00:06:09.608369 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:06:09.615543 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:06:09.625153 systemd[1]: Startup finished in 13.170s (kernel) + 24.319s (userspace) = 37.490s. 
Jul 12 00:06:10.321913 login[1919]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 12 00:06:10.323447 login[1920]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:10.332543 systemd-logind[1763]: New session 1 of user core. Jul 12 00:06:10.332936 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:06:10.339343 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:06:10.349901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:06:10.357340 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:06:10.361696 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:06:10.738839 systemd[1943]: Queued start job for default target default.target. Jul 12 00:06:10.739570 systemd[1943]: Created slice app.slice - User Application Slice. Jul 12 00:06:10.739601 systemd[1943]: Reached target paths.target - Paths. Jul 12 00:06:10.739613 systemd[1943]: Reached target timers.target - Timers. Jul 12 00:06:10.748191 systemd[1943]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:06:10.754814 systemd[1943]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:06:10.754881 systemd[1943]: Reached target sockets.target - Sockets. Jul 12 00:06:10.754894 systemd[1943]: Reached target basic.target - Basic System. Jul 12 00:06:10.754939 systemd[1943]: Reached target default.target - Main User Target. Jul 12 00:06:10.754961 systemd[1943]: Startup finished in 387ms. Jul 12 00:06:10.755380 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:06:10.762431 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 12 00:06:11.322655 login[1919]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:11.327491 systemd-logind[1763]: New session 2 of user core. Jul 12 00:06:11.333330 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:06:12.321533 waagent[1916]: 2025-07-12T00:06:12.321438Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 12 00:06:12.327536 waagent[1916]: 2025-07-12T00:06:12.327466Z INFO Daemon Daemon OS: flatcar 4081.3.4 Jul 12 00:06:12.332350 waagent[1916]: 2025-07-12T00:06:12.332294Z INFO Daemon Daemon Python: 3.11.9 Jul 12 00:06:12.337134 waagent[1916]: 2025-07-12T00:06:12.336912Z INFO Daemon Daemon Run daemon Jul 12 00:06:12.341106 waagent[1916]: 2025-07-12T00:06:12.341049Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.4' Jul 12 00:06:12.350366 waagent[1916]: 2025-07-12T00:06:12.350299Z INFO Daemon Daemon Using waagent for provisioning Jul 12 00:06:12.355789 waagent[1916]: 2025-07-12T00:06:12.355742Z INFO Daemon Daemon Activate resource disk Jul 12 00:06:12.360526 waagent[1916]: 2025-07-12T00:06:12.360479Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 12 00:06:12.371723 waagent[1916]: 2025-07-12T00:06:12.371664Z INFO Daemon Daemon Found device: None Jul 12 00:06:12.376181 waagent[1916]: 2025-07-12T00:06:12.376133Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 12 00:06:12.384559 waagent[1916]: 2025-07-12T00:06:12.384509Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 12 00:06:12.397477 waagent[1916]: 2025-07-12T00:06:12.397421Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:06:12.403258 waagent[1916]: 2025-07-12T00:06:12.403211Z INFO Daemon Daemon Running default provisioning handler Jul 12 
00:06:12.415234 waagent[1916]: 2025-07-12T00:06:12.415168Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 12 00:06:12.429458 waagent[1916]: 2025-07-12T00:06:12.429389Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 12 00:06:12.439131 waagent[1916]: 2025-07-12T00:06:12.439057Z INFO Daemon Daemon cloud-init is enabled: False Jul 12 00:06:12.444345 waagent[1916]: 2025-07-12T00:06:12.444290Z INFO Daemon Daemon Copying ovf-env.xml Jul 12 00:06:12.508206 waagent[1916]: 2025-07-12T00:06:12.507483Z INFO Daemon Daemon Successfully mounted dvd Jul 12 00:06:12.522430 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 12 00:06:12.525168 waagent[1916]: 2025-07-12T00:06:12.524275Z INFO Daemon Daemon Detect protocol endpoint Jul 12 00:06:12.529578 waagent[1916]: 2025-07-12T00:06:12.529510Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 12 00:06:12.535490 waagent[1916]: 2025-07-12T00:06:12.535426Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 12 00:06:12.542299 waagent[1916]: 2025-07-12T00:06:12.542240Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 12 00:06:12.547724 waagent[1916]: 2025-07-12T00:06:12.547668Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 12 00:06:12.552964 waagent[1916]: 2025-07-12T00:06:12.552910Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 12 00:06:12.583238 waagent[1916]: 2025-07-12T00:06:12.583126Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 12 00:06:12.590007 waagent[1916]: 2025-07-12T00:06:12.589974Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 12 00:06:12.595305 waagent[1916]: 2025-07-12T00:06:12.595254Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 12 00:06:12.935129 waagent[1916]: 2025-07-12T00:06:12.934723Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 12 00:06:12.941913 waagent[1916]: 2025-07-12T00:06:12.941838Z INFO Daemon Daemon Forcing an update of the goal state. Jul 12 00:06:12.952031 waagent[1916]: 2025-07-12T00:06:12.951967Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:06:12.972075 waagent[1916]: 2025-07-12T00:06:12.972032Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 12 00:06:12.978168 waagent[1916]: 2025-07-12T00:06:12.978122Z INFO Daemon Jul 12 00:06:12.981045 waagent[1916]: 2025-07-12T00:06:12.981003Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 702ef88f-1703-4f4e-9bab-d477745b8762 eTag: 3125753540938674281 source: Fabric] Jul 12 00:06:12.992583 waagent[1916]: 2025-07-12T00:06:12.992534Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 12 00:06:12.999566 waagent[1916]: 2025-07-12T00:06:12.999516Z INFO Daemon Jul 12 00:06:13.002441 waagent[1916]: 2025-07-12T00:06:13.002398Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:06:13.013667 waagent[1916]: 2025-07-12T00:06:13.013621Z INFO Daemon Daemon Downloading artifacts profile blob Jul 12 00:06:13.174695 waagent[1916]: 2025-07-12T00:06:13.174603Z INFO Daemon Downloaded certificate {'thumbprint': '4F0122C119BA5C33BA350F937F901BD5F518B5E5', 'hasPrivateKey': True} Jul 12 00:06:13.185209 waagent[1916]: 2025-07-12T00:06:13.185127Z INFO Daemon Downloaded certificate {'thumbprint': '218F5D8ADFBF4CC9849AB54EC70577F16AB5B6B5', 'hasPrivateKey': False} Jul 12 00:06:13.195154 waagent[1916]: 2025-07-12T00:06:13.195102Z INFO Daemon Fetch goal state completed Jul 12 00:06:13.238349 waagent[1916]: 2025-07-12T00:06:13.238305Z INFO Daemon Daemon Starting provisioning Jul 12 00:06:13.243446 waagent[1916]: 2025-07-12T00:06:13.243385Z INFO Daemon Daemon Handle ovf-env.xml. Jul 12 00:06:13.248184 waagent[1916]: 2025-07-12T00:06:13.248133Z INFO Daemon Daemon Set hostname [ci-4081.3.4-n-0fb9ec6aad] Jul 12 00:06:13.270108 waagent[1916]: 2025-07-12T00:06:13.267841Z INFO Daemon Daemon Publish hostname [ci-4081.3.4-n-0fb9ec6aad] Jul 12 00:06:13.274721 waagent[1916]: 2025-07-12T00:06:13.274656Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 12 00:06:13.281456 waagent[1916]: 2025-07-12T00:06:13.281401Z INFO Daemon Daemon Primary interface is [eth0] Jul 12 00:06:13.325074 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:06:13.325081 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:06:13.325143 systemd-networkd[1392]: eth0: DHCP lease lost Jul 12 00:06:13.327126 waagent[1916]: 2025-07-12T00:06:13.326386Z INFO Daemon Daemon Create user account if not exists Jul 12 00:06:13.332287 waagent[1916]: 2025-07-12T00:06:13.332056Z INFO Daemon Daemon User core already exists, skip useradd Jul 12 00:06:13.332169 systemd-networkd[1392]: eth0: DHCPv6 lease lost Jul 12 00:06:13.337979 waagent[1916]: 2025-07-12T00:06:13.337920Z INFO Daemon Daemon Configure sudoer Jul 12 00:06:13.343304 waagent[1916]: 2025-07-12T00:06:13.343244Z INFO Daemon Daemon Configure sshd Jul 12 00:06:13.348161 waagent[1916]: 2025-07-12T00:06:13.348102Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 12 00:06:13.361806 waagent[1916]: 2025-07-12T00:06:13.361446Z INFO Daemon Daemon Deploy ssh public key. Jul 12 00:06:13.375198 systemd-networkd[1392]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 12 00:06:14.486363 waagent[1916]: 2025-07-12T00:06:14.481364Z INFO Daemon Daemon Provisioning complete Jul 12 00:06:14.500094 waagent[1916]: 2025-07-12T00:06:14.500040Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 12 00:06:14.509034 waagent[1916]: 2025-07-12T00:06:14.508966Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 12 00:06:14.518570 waagent[1916]: 2025-07-12T00:06:14.518511Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 12 00:06:14.655198 waagent[2001]: 2025-07-12T00:06:14.654432Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 12 00:06:14.655198 waagent[2001]: 2025-07-12T00:06:14.654585Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.4 Jul 12 00:06:14.655198 waagent[2001]: 2025-07-12T00:06:14.654638Z INFO ExtHandler ExtHandler Python: 3.11.9 Jul 12 00:06:14.715121 waagent[2001]: 2025-07-12T00:06:14.714650Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 12 00:06:14.715121 waagent[2001]: 2025-07-12T00:06:14.714894Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:06:14.715121 waagent[2001]: 2025-07-12T00:06:14.714954Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:06:14.723547 waagent[2001]: 2025-07-12T00:06:14.723473Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 12 00:06:14.730158 waagent[2001]: 2025-07-12T00:06:14.730114Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 12 00:06:14.730693 waagent[2001]: 2025-07-12T00:06:14.730649Z INFO ExtHandler Jul 12 00:06:14.730763 waagent[2001]: 2025-07-12T00:06:14.730733Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 255e9058-fb52-4c44-83d7-3f81307564d3 eTag: 3125753540938674281 source: Fabric] Jul 12 00:06:14.731070 waagent[2001]: 2025-07-12T00:06:14.731028Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 12 00:06:14.731669 waagent[2001]: 2025-07-12T00:06:14.731620Z INFO ExtHandler Jul 12 00:06:14.731728 waagent[2001]: 2025-07-12T00:06:14.731700Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 12 00:06:14.736083 waagent[2001]: 2025-07-12T00:06:14.736049Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 12 00:06:14.816027 waagent[2001]: 2025-07-12T00:06:14.815881Z INFO ExtHandler Downloaded certificate {'thumbprint': '4F0122C119BA5C33BA350F937F901BD5F518B5E5', 'hasPrivateKey': True} Jul 12 00:06:14.816454 waagent[2001]: 2025-07-12T00:06:14.816408Z INFO ExtHandler Downloaded certificate {'thumbprint': '218F5D8ADFBF4CC9849AB54EC70577F16AB5B6B5', 'hasPrivateKey': False} Jul 12 00:06:14.816864 waagent[2001]: 2025-07-12T00:06:14.816823Z INFO ExtHandler Fetch goal state completed Jul 12 00:06:14.832937 waagent[2001]: 2025-07-12T00:06:14.832872Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2001 Jul 12 00:06:14.833090 waagent[2001]: 2025-07-12T00:06:14.833053Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 12 00:06:14.834747 waagent[2001]: 2025-07-12T00:06:14.834698Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.4', '', 'Flatcar Container Linux by Kinvolk'] Jul 12 00:06:14.835157 waagent[2001]: 2025-07-12T00:06:14.835115Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 12 00:06:14.866216 waagent[2001]: 2025-07-12T00:06:14.866174Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 12 00:06:14.866428 waagent[2001]: 2025-07-12T00:06:14.866389Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 12 00:06:14.872781 waagent[2001]: 2025-07-12T00:06:14.872300Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jul 12 00:06:14.878962 systemd[1]: Reloading requested from client PID 2016 ('systemctl') (unit waagent.service)... Jul 12 00:06:14.878978 systemd[1]: Reloading... Jul 12 00:06:14.958116 zram_generator::config[2050]: No configuration found. Jul 12 00:06:15.066001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:15.144979 systemd[1]: Reloading finished in 265 ms. Jul 12 00:06:15.165037 waagent[2001]: 2025-07-12T00:06:15.163295Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 12 00:06:15.170655 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit waagent.service)... Jul 12 00:06:15.170670 systemd[1]: Reloading... Jul 12 00:06:15.250117 zram_generator::config[2148]: No configuration found. Jul 12 00:06:15.357736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:06:15.432025 systemd[1]: Reloading finished in 261 ms. Jul 12 00:06:15.450185 waagent[2001]: 2025-07-12T00:06:15.449937Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 12 00:06:15.450185 waagent[2001]: 2025-07-12T00:06:15.450152Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 12 00:06:15.746362 waagent[2001]: 2025-07-12T00:06:15.745049Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 12 00:06:15.746362 waagent[2001]: 2025-07-12T00:06:15.745721Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 12 00:06:15.746742 waagent[2001]: 2025-07-12T00:06:15.746602Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:06:15.746742 waagent[2001]: 2025-07-12T00:06:15.746692Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:06:15.746931 waagent[2001]: 2025-07-12T00:06:15.746885Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 12 00:06:15.747079 waagent[2001]: 2025-07-12T00:06:15.747011Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 12 00:06:15.747250 waagent[2001]: 2025-07-12T00:06:15.747200Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 12 00:06:15.747250 waagent[2001]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 12 00:06:15.747250 waagent[2001]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 12 00:06:15.747250 waagent[2001]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 12 00:06:15.747250 waagent[2001]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:06:15.747250 waagent[2001]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:06:15.747250 waagent[2001]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 12 00:06:15.747930 waagent[2001]: 2025-07-12T00:06:15.747885Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 12 00:06:15.748045 waagent[2001]: 2025-07-12T00:06:15.748011Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 12 00:06:15.748468 waagent[2001]: 2025-07-12T00:06:15.748404Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 12 00:06:15.748613 waagent[2001]: 2025-07-12T00:06:15.748561Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 12 00:06:15.748747 waagent[2001]: 2025-07-12T00:06:15.748713Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 12 00:06:15.749172 waagent[2001]: 2025-07-12T00:06:15.749111Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 12 00:06:15.749351 waagent[2001]: 2025-07-12T00:06:15.749298Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 12 00:06:15.749466 waagent[2001]: 2025-07-12T00:06:15.749419Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 12 00:06:15.750242 waagent[2001]: 2025-07-12T00:06:15.750185Z INFO EnvHandler ExtHandler Configure routes Jul 12 00:06:15.751239 waagent[2001]: 2025-07-12T00:06:15.751181Z INFO EnvHandler ExtHandler Gateway:None Jul 12 00:06:15.751606 waagent[2001]: 2025-07-12T00:06:15.751558Z INFO EnvHandler ExtHandler Routes:None Jul 12 00:06:15.757058 waagent[2001]: 2025-07-12T00:06:15.756996Z INFO ExtHandler ExtHandler Jul 12 00:06:15.757593 waagent[2001]: 2025-07-12T00:06:15.757537Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5af2b351-6db0-4bd9-adb4-3d34850256e4 correlation e4da7e9f-9b24-4967-b147-e30cf1b1058d created: 2025-07-12T00:04:50.482857Z] Jul 12 00:06:15.758820 waagent[2001]: 2025-07-12T00:06:15.758757Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 12 00:06:15.759572 waagent[2001]: 2025-07-12T00:06:15.759522Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jul 12 00:06:15.797667 waagent[2001]: 2025-07-12T00:06:15.797598Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2425D4FB-682E-4091-BC4B-339552546BEA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 12 00:06:15.842638 waagent[2001]: 2025-07-12T00:06:15.842565Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 12 00:06:15.842638 waagent[2001]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:06:15.842638 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.842638 waagent[2001]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:06:15.842638 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.842638 waagent[2001]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:06:15.842638 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.842638 waagent[2001]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 12 00:06:15.842638 waagent[2001]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 12 00:06:15.842638 waagent[2001]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 12 00:06:15.846118 waagent[2001]: 2025-07-12T00:06:15.845793Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 12 00:06:15.846118 waagent[2001]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:06:15.846118 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.846118 waagent[2001]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 12 00:06:15.846118 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.846118 waagent[2001]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) 
Jul 12 00:06:15.846118 waagent[2001]: pkts bytes target prot opt in out source destination Jul 12 00:06:15.846118 waagent[2001]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 12 00:06:15.846118 waagent[2001]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 12 00:06:15.846118 waagent[2001]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 12 00:06:15.846118 waagent[2001]: 2025-07-12T00:06:15.846060Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 12 00:06:16.538514 waagent[2001]: 2025-07-12T00:06:16.538431Z INFO MonitorHandler ExtHandler Network interfaces: Jul 12 00:06:16.538514 waagent[2001]: Executing ['ip', '-a', '-o', 'link']: Jul 12 00:06:16.538514 waagent[2001]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 12 00:06:16.538514 waagent[2001]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:ff:80:62 brd ff:ff:ff:ff:ff:ff Jul 12 00:06:16.538514 waagent[2001]: 3: enP38952s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:ff:80:62 brd ff:ff:ff:ff:ff:ff\ altname enP38952p0s2 Jul 12 00:06:16.538514 waagent[2001]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 12 00:06:16.538514 waagent[2001]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 12 00:06:16.538514 waagent[2001]: 2: eth0 inet 10.200.20.44/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 12 00:06:16.538514 waagent[2001]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 12 00:06:16.538514 waagent[2001]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jul 12 00:06:16.538514 waagent[2001]: 2: eth0 inet6 fe80::20d:3aff:feff:8062/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 12 
00:06:16.538514 waagent[2001]: 3: enP38952s1 inet6 fe80::20d:3aff:feff:8062/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 12 00:06:17.682979 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:06:17.690303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:17.806299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:17.814437 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:17.939971 kubelet[2247]: E0712 00:06:17.939831 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:17.943591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:17.943768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:28.183199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:06:28.191267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:28.296510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:06:28.299510 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:28.430073 kubelet[2268]: E0712 00:06:28.430019 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:28.432395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:28.432576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:30.309850 chronyd[1797]: Selected source PHC0 Jul 12 00:06:37.310188 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:06:37.315321 systemd[1]: Started sshd@0-10.200.20.44:22-10.200.16.10:39514.service - OpenSSH per-connection server daemon (10.200.16.10:39514). Jul 12 00:06:37.834819 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 39514 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:37.836163 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:37.840479 systemd-logind[1763]: New session 3 of user core. Jul 12 00:06:37.847385 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:06:38.196645 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 12 00:06:38.246317 systemd[1]: Started sshd@1-10.200.20.44:22-10.200.16.10:39530.service - OpenSSH per-connection server daemon (10.200.16.10:39530). Jul 12 00:06:38.432942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:06:38.441282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 12 00:06:38.666761 sshd[2281]: Accepted publickey for core from 10.200.16.10 port 39530 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:38.668131 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:38.673385 systemd-logind[1763]: New session 4 of user core. Jul 12 00:06:38.680464 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:06:38.785987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:38.788911 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:38.823938 kubelet[2297]: E0712 00:06:38.823869 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:38.828297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:38.828459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:38.995299 sshd[2281]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:38.998434 systemd[1]: sshd@1-10.200.20.44:22-10.200.16.10:39530.service: Deactivated successfully. Jul 12 00:06:39.001287 systemd-logind[1763]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:06:39.002068 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:06:39.003134 systemd-logind[1763]: Removed session 4. Jul 12 00:06:39.074327 systemd[1]: Started sshd@2-10.200.20.44:22-10.200.16.10:39538.service - OpenSSH per-connection server daemon (10.200.16.10:39538). 
Jul 12 00:06:39.516670 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 39538 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:39.518135 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:39.522242 systemd-logind[1763]: New session 5 of user core. Jul 12 00:06:39.530449 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:06:39.859193 sshd[2309]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:39.862464 systemd-logind[1763]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:06:39.863412 systemd[1]: sshd@2-10.200.20.44:22-10.200.16.10:39538.service: Deactivated successfully. Jul 12 00:06:39.867129 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:06:39.868723 systemd-logind[1763]: Removed session 5. Jul 12 00:06:39.931328 systemd[1]: Started sshd@3-10.200.20.44:22-10.200.16.10:42750.service - OpenSSH per-connection server daemon (10.200.16.10:42750). Jul 12 00:06:40.352176 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 42750 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:40.353519 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:40.358152 systemd-logind[1763]: New session 6 of user core. Jul 12 00:06:40.363402 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:06:40.683717 sshd[2317]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:40.686692 systemd-logind[1763]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:06:40.688026 systemd[1]: sshd@3-10.200.20.44:22-10.200.16.10:42750.service: Deactivated successfully. Jul 12 00:06:40.689569 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:06:40.691333 systemd-logind[1763]: Removed session 6. 
Jul 12 00:06:40.762497 systemd[1]: Started sshd@4-10.200.20.44:22-10.200.16.10:42764.service - OpenSSH per-connection server daemon (10.200.16.10:42764). Jul 12 00:06:41.185282 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 42764 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:41.186607 sshd[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:41.190442 systemd-logind[1763]: New session 7 of user core. Jul 12 00:06:41.197389 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:06:41.539837 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:06:41.540133 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:41.566422 sudo[2329]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:41.653385 sshd[2325]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:41.657494 systemd-logind[1763]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:06:41.658457 systemd[1]: sshd@4-10.200.20.44:22-10.200.16.10:42764.service: Deactivated successfully. Jul 12 00:06:41.660880 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:06:41.661838 systemd-logind[1763]: Removed session 7. Jul 12 00:06:41.729352 systemd[1]: Started sshd@5-10.200.20.44:22-10.200.16.10:42778.service - OpenSSH per-connection server daemon (10.200.16.10:42778). Jul 12 00:06:42.150400 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 42778 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:42.151757 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:42.156701 systemd-logind[1763]: New session 8 of user core. Jul 12 00:06:42.162469 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 12 00:06:42.395818 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:06:42.396120 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:42.399372 sudo[2339]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:42.404007 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:06:42.404386 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:42.417328 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:42.418707 auditctl[2342]: No rules Jul 12 00:06:42.419172 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:06:42.419431 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:42.423038 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:06:42.446413 augenrules[2361]: No rules Jul 12 00:06:42.448353 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:06:42.450403 sudo[2338]: pam_unix(sudo:session): session closed for user root Jul 12 00:06:42.535013 sshd[2334]: pam_unix(sshd:session): session closed for user core Jul 12 00:06:42.538657 systemd[1]: sshd@5-10.200.20.44:22-10.200.16.10:42778.service: Deactivated successfully. Jul 12 00:06:42.541440 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:06:42.542332 systemd-logind[1763]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:06:42.543165 systemd-logind[1763]: Removed session 8. Jul 12 00:06:42.623338 systemd[1]: Started sshd@6-10.200.20.44:22-10.200.16.10:42786.service - OpenSSH per-connection server daemon (10.200.16.10:42786). 
Jul 12 00:06:43.069649 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 42786 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:06:43.070981 sshd[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:06:43.075538 systemd-logind[1763]: New session 9 of user core. Jul 12 00:06:43.082422 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:06:43.327368 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:06:43.327630 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:06:44.018370 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:06:44.018580 (dockerd)[2389]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:06:44.451834 dockerd[2389]: time="2025-07-12T00:06:44.451531064Z" level=info msg="Starting up" Jul 12 00:06:44.709934 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport72610363-merged.mount: Deactivated successfully. Jul 12 00:06:44.805897 systemd[1]: var-lib-docker-metacopy\x2dcheck2292189967-merged.mount: Deactivated successfully. Jul 12 00:06:44.827123 dockerd[2389]: time="2025-07-12T00:06:44.826705247Z" level=info msg="Loading containers: start." Jul 12 00:06:44.976111 kernel: Initializing XFRM netlink socket Jul 12 00:06:45.095828 systemd-networkd[1392]: docker0: Link UP Jul 12 00:06:45.123425 dockerd[2389]: time="2025-07-12T00:06:45.123275539Z" level=info msg="Loading containers: done." 
Jul 12 00:06:45.151331 dockerd[2389]: time="2025-07-12T00:06:45.151283289Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:06:45.151550 dockerd[2389]: time="2025-07-12T00:06:45.151396969Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:06:45.151550 dockerd[2389]: time="2025-07-12T00:06:45.151533329Z" level=info msg="Daemon has completed initialization" Jul 12 00:06:45.225126 dockerd[2389]: time="2025-07-12T00:06:45.224997542Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:06:45.225244 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:06:46.188542 containerd[1906]: time="2025-07-12T00:06:46.188234671Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:06:47.208848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998103148.mount: Deactivated successfully. 
Jul 12 00:06:48.637986 containerd[1906]: time="2025-07-12T00:06:48.637934444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:48.644595 containerd[1906]: time="2025-07-12T00:06:48.644556799Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 12 00:06:48.649301 containerd[1906]: time="2025-07-12T00:06:48.649204275Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:48.655471 containerd[1906]: time="2025-07-12T00:06:48.655402510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:48.656799 containerd[1906]: time="2025-07-12T00:06:48.656603989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.468329558s" Jul 12 00:06:48.656799 containerd[1906]: time="2025-07-12T00:06:48.656658149Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:06:48.658070 containerd[1906]: time="2025-07-12T00:06:48.657980268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:06:48.932955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jul 12 00:06:48.942335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:49.046275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:49.051067 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:49.086982 kubelet[2590]: E0712 00:06:49.086912 2590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:49.089738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:49.089899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:06:50.351115 containerd[1906]: time="2025-07-12T00:06:50.351054885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:50.357185 containerd[1906]: time="2025-07-12T00:06:50.356939000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 12 00:06:50.366622 containerd[1906]: time="2025-07-12T00:06:50.366589673Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:50.379294 containerd[1906]: time="2025-07-12T00:06:50.379248943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:50.380598 containerd[1906]: 
time="2025-07-12T00:06:50.380450862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.722436274s" Jul 12 00:06:50.380598 containerd[1906]: time="2025-07-12T00:06:50.380484782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:06:50.381183 containerd[1906]: time="2025-07-12T00:06:50.381115702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:06:51.452126 containerd[1906]: time="2025-07-12T00:06:51.451865838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.455740 containerd[1906]: time="2025-07-12T00:06:51.455703315Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 12 00:06:51.461208 containerd[1906]: time="2025-07-12T00:06:51.461164510Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.470349 containerd[1906]: time="2025-07-12T00:06:51.470268543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:51.471551 containerd[1906]: time="2025-07-12T00:06:51.471426463Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id 
\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.090128601s" Jul 12 00:06:51.471551 containerd[1906]: time="2025-07-12T00:06:51.471465543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:06:51.472242 containerd[1906]: time="2025-07-12T00:06:51.471987062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:06:52.142192 update_engine[1765]: I20250712 00:06:52.142122 1765 update_attempter.cc:509] Updating boot flags... Jul 12 00:06:52.194161 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2618) Jul 12 00:06:52.661938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835488380.mount: Deactivated successfully. 
Jul 12 00:06:53.005499 containerd[1906]: time="2025-07-12T00:06:53.005447508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:53.009957 containerd[1906]: time="2025-07-12T00:06:53.009919426Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 12 00:06:53.016077 containerd[1906]: time="2025-07-12T00:06:53.016051144Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:53.021687 containerd[1906]: time="2025-07-12T00:06:53.021651381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:53.022222 containerd[1906]: time="2025-07-12T00:06:53.022184661Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.550163439s" Jul 12 00:06:53.022283 containerd[1906]: time="2025-07-12T00:06:53.022222181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:06:53.022918 containerd[1906]: time="2025-07-12T00:06:53.022636101Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:06:53.728649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120506895.mount: Deactivated successfully. 
Jul 12 00:06:55.276195 containerd[1906]: time="2025-07-12T00:06:55.276138402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:55.279226 containerd[1906]: time="2025-07-12T00:06:55.279185241Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 12 00:06:55.283706 containerd[1906]: time="2025-07-12T00:06:55.283651279Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:55.289215 containerd[1906]: time="2025-07-12T00:06:55.289151836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:55.290926 containerd[1906]: time="2025-07-12T00:06:55.290583156Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.267914415s" Jul 12 00:06:55.290926 containerd[1906]: time="2025-07-12T00:06:55.290625396Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:06:55.291490 containerd[1906]: time="2025-07-12T00:06:55.291459315Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:06:55.955547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343705735.mount: Deactivated successfully. 
Jul 12 00:06:55.989571 containerd[1906]: time="2025-07-12T00:06:55.989517529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:55.993204 containerd[1906]: time="2025-07-12T00:06:55.993135128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:06:56.002451 containerd[1906]: time="2025-07-12T00:06:56.002391563Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.008933 containerd[1906]: time="2025-07-12T00:06:56.008874161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:06:56.010038 containerd[1906]: time="2025-07-12T00:06:56.009658720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 718.077405ms" Jul 12 00:06:56.010038 containerd[1906]: time="2025-07-12T00:06:56.009694920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:06:56.010222 containerd[1906]: time="2025-07-12T00:06:56.010183120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:06:56.776067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573511660.mount: Deactivated successfully. Jul 12 00:06:59.183019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jul 12 00:06:59.188272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:06:59.289275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:06:59.293397 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:06:59.416400 kubelet[2746]: E0712 00:06:59.416334 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:06:59.418755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:06:59.418943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:00.757952 containerd[1906]: time="2025-07-12T00:07:00.757893598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:00.763001 containerd[1906]: time="2025-07-12T00:07:00.762948475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 12 00:07:00.772478 containerd[1906]: time="2025-07-12T00:07:00.772420671Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:00.781812 containerd[1906]: time="2025-07-12T00:07:00.781751147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:00.783396 containerd[1906]: time="2025-07-12T00:07:00.782972427Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.772764427s" Jul 12 00:07:00.783396 containerd[1906]: time="2025-07-12T00:07:00.783009707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:07:06.630549 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:06.636310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:06.662414 systemd[1]: Reloading requested from client PID 2812 ('systemctl') (unit session-9.scope)... Jul 12 00:07:06.662433 systemd[1]: Reloading... Jul 12 00:07:06.747134 zram_generator::config[2855]: No configuration found. Jul 12 00:07:06.854556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:06.931528 systemd[1]: Reloading finished in 268 ms. Jul 12 00:07:06.964211 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:07:06.964331 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:07:06.964655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:06.972374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:07.153258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:07:07.169451 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:07.277538 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:07.277538 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:07.277538 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:07.277907 kubelet[2926]: I0712 00:07:07.277632 2926 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:07.974136 kubelet[2926]: I0712 00:07:07.973885 2926 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:07:07.974136 kubelet[2926]: I0712 00:07:07.973916 2926 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:07.974393 kubelet[2926]: I0712 00:07:07.974381 2926 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:07:07.991080 kubelet[2926]: E0712 00:07:07.991024 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:07.991260 kubelet[2926]: I0712 
00:07:07.991246 2926 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:08.000116 kubelet[2926]: E0712 00:07:08.000048 2926 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:08.000116 kubelet[2926]: I0712 00:07:08.000103 2926 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:08.003998 kubelet[2926]: I0712 00:07:08.003977 2926 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:07:08.004846 kubelet[2926]: I0712 00:07:08.004826 2926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:07:08.004975 kubelet[2926]: I0712 00:07:08.004950 2926 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:08.005160 kubelet[2926]: I0712 00:07:08.004976 2926 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.4-n-0fb9ec6aad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:07:08.005249 kubelet[2926]: I0712 00:07:08.005169 2926 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:08.005249 kubelet[2926]: I0712 00:07:08.005180 2926 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:07:08.005295 kubelet[2926]: I0712 00:07:08.005291 2926 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:08.007371 kubelet[2926]: I0712 00:07:08.007159 2926 
kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:07:08.007371 kubelet[2926]: I0712 00:07:08.007189 2926 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:08.007371 kubelet[2926]: I0712 00:07:08.007213 2926 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:07:08.007371 kubelet[2926]: I0712 00:07:08.007232 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:08.010983 kubelet[2926]: W0712 00:07:08.010666 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-0fb9ec6aad&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.010983 kubelet[2926]: E0712 00:07:08.010726 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-0fb9ec6aad&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:08.011777 kubelet[2926]: W0712 00:07:08.011721 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.011777 kubelet[2926]: E0712 00:07:08.011776 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:08.012954 kubelet[2926]: I0712 00:07:08.011867 2926 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:08.012954 kubelet[2926]: I0712 00:07:08.012341 2926 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:08.012954 kubelet[2926]: W0712 00:07:08.012384 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:07:08.012954 kubelet[2926]: I0712 00:07:08.012930 2926 server.go:1274] "Started kubelet" Jul 12 00:07:08.017975 kubelet[2926]: E0712 00:07:08.016956 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.44:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.4-n-0fb9ec6aad.1851584f6325b66e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.4-n-0fb9ec6aad,UID:ci-4081.3.4-n-0fb9ec6aad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.4-n-0fb9ec6aad,},FirstTimestamp:2025-07-12 00:07:08.01291019 +0000 UTC m=+0.840507143,LastTimestamp:2025-07-12 00:07:08.01291019 +0000 UTC m=+0.840507143,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.4-n-0fb9ec6aad,}" Jul 12 00:07:08.018146 kubelet[2926]: I0712 00:07:08.017972 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:08.018430 kubelet[2926]: I0712 00:07:08.018402 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:08.019429 kubelet[2926]: I0712 00:07:08.018054 2926 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jul 12 00:07:08.019741 kubelet[2926]: I0712 00:07:08.019593 2926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:08.020944 kubelet[2926]: I0712 00:07:08.018011 2926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:08.021781 kubelet[2926]: I0712 00:07:08.021739 2926 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:07:08.024030 kubelet[2926]: I0712 00:07:08.024015 2926 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:07:08.024436 kubelet[2926]: E0712 00:07:08.024395 2926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-0fb9ec6aad\" not found" Jul 12 00:07:08.024822 kubelet[2926]: I0712 00:07:08.024790 2926 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:08.024918 kubelet[2926]: I0712 00:07:08.024895 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:08.025225 kubelet[2926]: E0712 00:07:08.025125 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-0fb9ec6aad?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="200ms" Jul 12 00:07:08.026048 kubelet[2926]: I0712 00:07:08.026019 2926 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:08.027184 kubelet[2926]: I0712 00:07:08.027168 2926 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:07:08.027400 kubelet[2926]: E0712 00:07:08.024042 2926 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:08.027400 kubelet[2926]: I0712 00:07:08.027304 2926 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:08.030267 kubelet[2926]: W0712 00:07:08.030215 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.032100 kubelet[2926]: E0712 00:07:08.031361 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:08.042208 kubelet[2926]: I0712 00:07:08.042169 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:08.043520 kubelet[2926]: I0712 00:07:08.043225 2926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:07:08.043520 kubelet[2926]: I0712 00:07:08.043246 2926 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:07:08.043520 kubelet[2926]: I0712 00:07:08.043264 2926 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:07:08.043520 kubelet[2926]: E0712 00:07:08.043299 2926 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:08.045151 kubelet[2926]: W0712 00:07:08.045113 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.045291 kubelet[2926]: E0712 00:07:08.045263 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:08.124818 kubelet[2926]: E0712 00:07:08.124788 2926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-0fb9ec6aad\" not found" Jul 12 00:07:08.144157 kubelet[2926]: E0712 00:07:08.144135 2926 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:08.153001 kubelet[2926]: I0712 00:07:08.152968 2926 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:07:08.153001 kubelet[2926]: I0712 00:07:08.152985 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:08.153001 kubelet[2926]: I0712 00:07:08.153006 2926 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:08.159289 kubelet[2926]: I0712 00:07:08.159261 2926 
policy_none.go:49] "None policy: Start" Jul 12 00:07:08.160501 kubelet[2926]: I0712 00:07:08.160243 2926 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:07:08.160501 kubelet[2926]: I0712 00:07:08.160275 2926 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:08.172464 kubelet[2926]: I0712 00:07:08.172442 2926 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:08.173442 kubelet[2926]: I0712 00:07:08.172748 2926 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:08.173442 kubelet[2926]: I0712 00:07:08.172762 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:08.173442 kubelet[2926]: I0712 00:07:08.173013 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:08.175128 kubelet[2926]: E0712 00:07:08.175097 2926 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.4-n-0fb9ec6aad\" not found" Jul 12 00:07:08.225630 kubelet[2926]: E0712 00:07:08.225534 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-0fb9ec6aad?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="400ms" Jul 12 00:07:08.275857 kubelet[2926]: I0712 00:07:08.275819 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.276255 kubelet[2926]: E0712 00:07:08.276228 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430009 kubelet[2926]: I0712 00:07:08.429789 2926 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430009 kubelet[2926]: I0712 00:07:08.429821 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430009 kubelet[2926]: I0712 00:07:08.429844 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430009 kubelet[2926]: I0712 00:07:08.429863 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430009 kubelet[2926]: I0712 00:07:08.429879 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: 
\"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430492 kubelet[2926]: I0712 00:07:08.429895 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430492 kubelet[2926]: I0712 00:07:08.429912 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec1b331a5afdce8e7f0b6fa214cbe7bd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"ec1b331a5afdce8e7f0b6fa214cbe7bd\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430492 kubelet[2926]: I0712 00:07:08.429925 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.430492 kubelet[2926]: I0712 00:07:08.429940 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.478557 kubelet[2926]: I0712 00:07:08.478448 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.478798 kubelet[2926]: E0712 
00:07:08.478763 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.626033 kubelet[2926]: E0712 00:07:08.625967 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-0fb9ec6aad?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="800ms" Jul 12 00:07:08.652571 containerd[1906]: time="2025-07-12T00:07:08.652352702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-0fb9ec6aad,Uid:b5514ee06ac861f524c945295bf8bf56,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:08.652571 containerd[1906]: time="2025-07-12T00:07:08.652353702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-0fb9ec6aad,Uid:ec1b331a5afdce8e7f0b6fa214cbe7bd,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:08.657149 containerd[1906]: time="2025-07-12T00:07:08.657112180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad,Uid:204cceb01f392d93f4eafd98a4837eaf,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:08.855612 kubelet[2926]: W0712 00:07:08.855467 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.855612 kubelet[2926]: E0712 00:07:08.855540 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:08.881430 kubelet[2926]: I0712 00:07:08.881388 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.881791 kubelet[2926]: E0712 00:07:08.881747 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:08.976652 kubelet[2926]: W0712 00:07:08.976557 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-0fb9ec6aad&limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:08.976652 kubelet[2926]: E0712 00:07:08.976620 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.4-n-0fb9ec6aad&limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.171917 kubelet[2926]: W0712 00:07:09.171808 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:09.171917 kubelet[2926]: E0712 00:07:09.171855 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.403904 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount697060668.mount: Deactivated successfully. Jul 12 00:07:09.427402 kubelet[2926]: E0712 00:07:09.427292 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.4-n-0fb9ec6aad?timeout=10s\": dial tcp 10.200.20.44:6443: connect: connection refused" interval="1.6s" Jul 12 00:07:09.448947 containerd[1906]: time="2025-07-12T00:07:09.448130264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:09.452124 containerd[1906]: time="2025-07-12T00:07:09.451543902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:09.456989 containerd[1906]: time="2025-07-12T00:07:09.456949540Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 12 00:07:09.462110 containerd[1906]: time="2025-07-12T00:07:09.462020177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:09.470206 containerd[1906]: time="2025-07-12T00:07:09.470159574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:09.479122 containerd[1906]: time="2025-07-12T00:07:09.477509290Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:09.484364 containerd[1906]: time="2025-07-12T00:07:09.484301047Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:07:09.490065 containerd[1906]: time="2025-07-12T00:07:09.490001125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:07:09.491332 containerd[1906]: time="2025-07-12T00:07:09.491081724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 838.604502ms" Jul 12 00:07:09.492970 containerd[1906]: time="2025-07-12T00:07:09.492936283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 840.479181ms" Jul 12 00:07:09.498598 containerd[1906]: time="2025-07-12T00:07:09.498560601Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 841.375301ms" Jul 12 00:07:09.586126 kubelet[2926]: W0712 00:07:09.586040 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.44:6443: connect: connection refused Jul 12 00:07:09.586126 kubelet[2926]: 
E0712 00:07:09.586084 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:09.683990 kubelet[2926]: I0712 00:07:09.683657 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:09.683990 kubelet[2926]: E0712 00:07:09.683932 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.44:6443/api/v1/nodes\": dial tcp 10.200.20.44:6443: connect: connection refused" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:10.095048 containerd[1906]: time="2025-07-12T00:07:10.094897332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:10.095705 containerd[1906]: time="2025-07-12T00:07:10.095263612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:10.095705 containerd[1906]: time="2025-07-12T00:07:10.095589612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.095813 containerd[1906]: time="2025-07-12T00:07:10.095692332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.096370 containerd[1906]: time="2025-07-12T00:07:10.096300132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:10.096553 containerd[1906]: time="2025-07-12T00:07:10.096526092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:10.096657 containerd[1906]: time="2025-07-12T00:07:10.096637172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.097269 containerd[1906]: time="2025-07-12T00:07:10.097227211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.100572 containerd[1906]: time="2025-07-12T00:07:10.100347610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:10.100718 containerd[1906]: time="2025-07-12T00:07:10.100558930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:10.100822 containerd[1906]: time="2025-07-12T00:07:10.100774170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.101271 containerd[1906]: time="2025-07-12T00:07:10.101115650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:10.107235 kubelet[2926]: E0712 00:07:10.107202 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.44:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:10.161741 containerd[1906]: time="2025-07-12T00:07:10.161697982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad,Uid:204cceb01f392d93f4eafd98a4837eaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6002c94cd1caf5a26ae282f65e985d1e08a4cbeebe63765a3ab77e82f2ba32\"" Jul 12 00:07:10.163517 containerd[1906]: time="2025-07-12T00:07:10.163408822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.4-n-0fb9ec6aad,Uid:ec1b331a5afdce8e7f0b6fa214cbe7bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8729c8a5a50463215391b86b357591501530a597e01ae56afbb35882c05f04f8\"" Jul 12 00:07:10.172022 containerd[1906]: time="2025-07-12T00:07:10.171973858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.4-n-0fb9ec6aad,Uid:b5514ee06ac861f524c945295bf8bf56,Namespace:kube-system,Attempt:0,} returns sandbox id \"f965edac8bd00318e7496f3a878747bfcfe5c5e97b95cad2cd8b97e26baa6127\"" Jul 12 00:07:10.173166 containerd[1906]: time="2025-07-12T00:07:10.173046297Z" level=info msg="CreateContainer within sandbox \"7f6002c94cd1caf5a26ae282f65e985d1e08a4cbeebe63765a3ab77e82f2ba32\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:07:10.174513 containerd[1906]: time="2025-07-12T00:07:10.174484657Z" level=info msg="CreateContainer within sandbox \"8729c8a5a50463215391b86b357591501530a597e01ae56afbb35882c05f04f8\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:07:10.175056 containerd[1906]: time="2025-07-12T00:07:10.175022496Z" level=info msg="CreateContainer within sandbox \"f965edac8bd00318e7496f3a878747bfcfe5c5e97b95cad2cd8b97e26baa6127\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:07:10.278705 containerd[1906]: time="2025-07-12T00:07:10.278657090Z" level=info msg="CreateContainer within sandbox \"7f6002c94cd1caf5a26ae282f65e985d1e08a4cbeebe63765a3ab77e82f2ba32\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bdd4cb92e99e091eac6b4b093e598607fb4ce5d6f621556794ff5078a624c848\"" Jul 12 00:07:10.279305 containerd[1906]: time="2025-07-12T00:07:10.279281529Z" level=info msg="StartContainer for \"bdd4cb92e99e091eac6b4b093e598607fb4ce5d6f621556794ff5078a624c848\"" Jul 12 00:07:10.286975 containerd[1906]: time="2025-07-12T00:07:10.286866646Z" level=info msg="CreateContainer within sandbox \"8729c8a5a50463215391b86b357591501530a597e01ae56afbb35882c05f04f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ccfeb4b41e4945c01182de1959f831498d528805352fcffc861959e22a7b1c4\"" Jul 12 00:07:10.287601 containerd[1906]: time="2025-07-12T00:07:10.287569166Z" level=info msg="StartContainer for \"6ccfeb4b41e4945c01182de1959f831498d528805352fcffc861959e22a7b1c4\"" Jul 12 00:07:10.290972 containerd[1906]: time="2025-07-12T00:07:10.290941164Z" level=info msg="CreateContainer within sandbox \"f965edac8bd00318e7496f3a878747bfcfe5c5e97b95cad2cd8b97e26baa6127\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"543f1faa2615da58e38481a30d28c9346fa7988efb5fb47506cd7d29ba864431\"" Jul 12 00:07:10.296902 containerd[1906]: time="2025-07-12T00:07:10.296855641Z" level=info msg="StartContainer for \"543f1faa2615da58e38481a30d28c9346fa7988efb5fb47506cd7d29ba864431\"" Jul 12 00:07:10.354790 containerd[1906]: time="2025-07-12T00:07:10.354545095Z" level=info 
msg="StartContainer for \"bdd4cb92e99e091eac6b4b093e598607fb4ce5d6f621556794ff5078a624c848\" returns successfully" Jul 12 00:07:10.399190 containerd[1906]: time="2025-07-12T00:07:10.399038675Z" level=info msg="StartContainer for \"6ccfeb4b41e4945c01182de1959f831498d528805352fcffc861959e22a7b1c4\" returns successfully" Jul 12 00:07:10.400299 containerd[1906]: time="2025-07-12T00:07:10.399688914Z" level=info msg="StartContainer for \"543f1faa2615da58e38481a30d28c9346fa7988efb5fb47506cd7d29ba864431\" returns successfully" Jul 12 00:07:11.288746 kubelet[2926]: I0712 00:07:11.288511 2926 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:12.379578 kubelet[2926]: E0712 00:07:12.379528 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.4-n-0fb9ec6aad\" not found" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:12.418260 kubelet[2926]: I0712 00:07:12.417548 2926 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:12.418260 kubelet[2926]: E0712 00:07:12.417593 2926 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.4-n-0fb9ec6aad\": node \"ci-4081.3.4-n-0fb9ec6aad\" not found" Jul 12 00:07:13.014547 kubelet[2926]: I0712 00:07:13.013288 2926 apiserver.go:52] "Watching apiserver" Jul 12 00:07:13.028209 kubelet[2926]: I0712 00:07:13.028173 2926 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:07:13.096576 kubelet[2926]: W0712 00:07:13.096535 2926 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:14.698199 systemd[1]: Reloading requested from client PID 3202 ('systemctl') (unit session-9.scope)... Jul 12 00:07:14.698215 systemd[1]: Reloading... 
Jul 12 00:07:14.796546 zram_generator::config[3245]: No configuration found. Jul 12 00:07:14.925614 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:15.011837 systemd[1]: Reloading finished in 313 ms. Jul 12 00:07:15.039902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:15.052466 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:15.052969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:15.060365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:15.245288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:15.253389 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:15.299007 kubelet[3316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:15.299007 kubelet[3316]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:15.299007 kubelet[3316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:07:15.299007 kubelet[3316]: I0712 00:07:15.298636 3316 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:15.305117 kubelet[3316]: I0712 00:07:15.304983 3316 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:07:15.305117 kubelet[3316]: I0712 00:07:15.305015 3316 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:15.305340 kubelet[3316]: I0712 00:07:15.305314 3316 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:07:15.306756 kubelet[3316]: I0712 00:07:15.306734 3316 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:07:15.313341 kubelet[3316]: I0712 00:07:15.309870 3316 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:15.316485 kubelet[3316]: E0712 00:07:15.316253 3316 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:15.316485 kubelet[3316]: I0712 00:07:15.316390 3316 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:15.319422 kubelet[3316]: I0712 00:07:15.319391 3316 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:15.319779 kubelet[3316]: I0712 00:07:15.319761 3316 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:07:15.319902 kubelet[3316]: I0712 00:07:15.319871 3316 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:15.320091 kubelet[3316]: I0712 00:07:15.319899 3316 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.4-n-0fb9ec6aad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Jul 12 00:07:15.320189 kubelet[3316]: I0712 00:07:15.320139 3316 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:15.320189 kubelet[3316]: I0712 00:07:15.320151 3316 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:07:15.320189 kubelet[3316]: I0712 00:07:15.320188 3316 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:15.320307 kubelet[3316]: I0712 00:07:15.320293 3316 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:07:15.320345 kubelet[3316]: I0712 00:07:15.320311 3316 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:15.320345 kubelet[3316]: I0712 00:07:15.320332 3316 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:07:15.320491 kubelet[3316]: I0712 00:07:15.320350 3316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:15.321395 kubelet[3316]: I0712 00:07:15.321365 3316 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:15.321937 kubelet[3316]: I0712 00:07:15.321849 3316 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:07:15.322557 kubelet[3316]: I0712 00:07:15.322324 3316 server.go:1274] "Started kubelet" Jul 12 00:07:15.324493 kubelet[3316]: I0712 00:07:15.324463 3316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:15.331681 kubelet[3316]: I0712 00:07:15.328913 3316 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:15.331681 kubelet[3316]: I0712 00:07:15.329862 3316 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:07:15.331681 kubelet[3316]: I0712 00:07:15.330962 3316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:15.331681 kubelet[3316]: I0712 00:07:15.331194 
3316 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:15.331681 kubelet[3316]: I0712 00:07:15.331395 3316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:15.332615 kubelet[3316]: I0712 00:07:15.332378 3316 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:07:15.332615 kubelet[3316]: E0712 00:07:15.332584 3316 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.4-n-0fb9ec6aad\" not found" Jul 12 00:07:15.335577 kubelet[3316]: I0712 00:07:15.335352 3316 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:07:15.335577 kubelet[3316]: I0712 00:07:15.335501 3316 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:15.339805 kubelet[3316]: I0712 00:07:15.338608 3316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:15.342392 kubelet[3316]: I0712 00:07:15.340780 3316 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:07:15.342392 kubelet[3316]: I0712 00:07:15.340816 3316 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:07:15.342392 kubelet[3316]: I0712 00:07:15.340842 3316 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:07:15.342392 kubelet[3316]: E0712 00:07:15.340889 3316 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:15.360113 kubelet[3316]: I0712 00:07:15.357878 3316 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:15.360113 kubelet[3316]: I0712 00:07:15.358011 3316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:15.371117 kubelet[3316]: I0712 00:07:15.370659 3316 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:15.412916 kubelet[3316]: E0712 00:07:15.412863 3316 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:15.552511 kubelet[3316]: E0712 00:07:15.440954 3316 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 00:07:15.552511 kubelet[3316]: I0712 00:07:15.452429 3316 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:07:15.552511 kubelet[3316]: I0712 00:07:15.452441 3316 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:15.552511 kubelet[3316]: I0712 00:07:15.452460 3316 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:15.553121 kubelet[3316]: I0712 00:07:15.552844 3316 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:07:15.553121 kubelet[3316]: I0712 00:07:15.552868 3316 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:07:15.553121 kubelet[3316]: I0712 00:07:15.552889 3316 policy_none.go:49] "None policy: Start" Jul 12 00:07:15.554293 kubelet[3316]: I0712 00:07:15.554268 3316 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:07:15.554509 kubelet[3316]: I0712 00:07:15.554490 3316 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:15.555352 kubelet[3316]: I0712 00:07:15.555279 3316 state_mem.go:75] "Updated machine memory state" Jul 12 00:07:15.556794 kubelet[3316]: I0712 00:07:15.556715 3316 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:15.557053 kubelet[3316]: I0712 00:07:15.556989 3316 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:15.557633 kubelet[3316]: I0712 00:07:15.557003 3316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:15.558147 kubelet[3316]: I0712 00:07:15.558067 3316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:15.650256 kubelet[3316]: 
W0712 00:07:15.650215 3316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:15.655352 kubelet[3316]: W0712 00:07:15.655150 3316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:15.655994 kubelet[3316]: W0712 00:07:15.655970 3316 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 12 00:07:15.656072 kubelet[3316]: E0712 00:07:15.656028 3316 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.667184 kubelet[3316]: I0712 00:07:15.667046 3316 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.682801 kubelet[3316]: I0712 00:07:15.682681 3316 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.684122 kubelet[3316]: I0712 00:07:15.683028 3316 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738306 kubelet[3316]: I0712 00:07:15.738236 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-ca-certs\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738306 kubelet[3316]: I0712 00:07:15.738277 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738630 kubelet[3316]: I0712 00:07:15.738495 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738630 kubelet[3316]: I0712 00:07:15.738523 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738630 kubelet[3316]: I0712 00:07:15.738540 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec1b331a5afdce8e7f0b6fa214cbe7bd-kubeconfig\") pod \"kube-scheduler-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"ec1b331a5afdce8e7f0b6fa214cbe7bd\") " pod="kube-system/kube-scheduler-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738630 kubelet[3316]: I0712 00:07:15.738555 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5514ee06ac861f524c945295bf8bf56-k8s-certs\") pod \"kube-apiserver-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"b5514ee06ac861f524c945295bf8bf56\") " pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738630 kubelet[3316]: I0712 
00:07:15.738570 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-ca-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738770 kubelet[3316]: I0712 00:07:15.738586 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:15.738770 kubelet[3316]: I0712 00:07:15.738600 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/204cceb01f392d93f4eafd98a4837eaf-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad\" (UID: \"204cceb01f392d93f4eafd98a4837eaf\") " pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:07:16.321743 kubelet[3316]: I0712 00:07:16.321469 3316 apiserver.go:52] "Watching apiserver" Jul 12 00:07:16.336338 kubelet[3316]: I0712 00:07:16.336282 3316 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:07:16.451036 kubelet[3316]: I0712 00:07:16.450816 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.4-n-0fb9ec6aad" podStartSLOduration=1.450776321 podStartE2EDuration="1.450776321s" podCreationTimestamp="2025-07-12 00:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:16.450593641 +0000 UTC 
m=+1.192724747" watchObservedRunningTime="2025-07-12 00:07:16.450776321 +0000 UTC m=+1.192907427" Jul 12 00:07:16.480997 kubelet[3316]: I0712 00:07:16.480929 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.4-n-0fb9ec6aad" podStartSLOduration=3.4809114660000002 podStartE2EDuration="3.480911466s" podCreationTimestamp="2025-07-12 00:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:16.480700546 +0000 UTC m=+1.222831652" watchObservedRunningTime="2025-07-12 00:07:16.480911466 +0000 UTC m=+1.223042572" Jul 12 00:07:16.481192 kubelet[3316]: I0712 00:07:16.481048 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.4-n-0fb9ec6aad" podStartSLOduration=1.4810428660000001 podStartE2EDuration="1.481042866s" podCreationTimestamp="2025-07-12 00:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:16.465230874 +0000 UTC m=+1.207361980" watchObservedRunningTime="2025-07-12 00:07:16.481042866 +0000 UTC m=+1.223174052" Jul 12 00:07:21.473837 kubelet[3316]: I0712 00:07:21.473712 3316 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:07:21.476236 containerd[1906]: time="2025-07-12T00:07:21.475994636Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:07:21.477451 kubelet[3316]: I0712 00:07:21.476277 3316 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:07:22.383364 kubelet[3316]: I0712 00:07:22.383210 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2820ab67-cfb5-4cfb-99f7-275863011207-kube-proxy\") pod \"kube-proxy-r5jrw\" (UID: \"2820ab67-cfb5-4cfb-99f7-275863011207\") " pod="kube-system/kube-proxy-r5jrw" Jul 12 00:07:22.383364 kubelet[3316]: I0712 00:07:22.383250 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2820ab67-cfb5-4cfb-99f7-275863011207-xtables-lock\") pod \"kube-proxy-r5jrw\" (UID: \"2820ab67-cfb5-4cfb-99f7-275863011207\") " pod="kube-system/kube-proxy-r5jrw" Jul 12 00:07:22.383364 kubelet[3316]: I0712 00:07:22.383272 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2820ab67-cfb5-4cfb-99f7-275863011207-lib-modules\") pod \"kube-proxy-r5jrw\" (UID: \"2820ab67-cfb5-4cfb-99f7-275863011207\") " pod="kube-system/kube-proxy-r5jrw" Jul 12 00:07:22.383364 kubelet[3316]: I0712 00:07:22.383294 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqdr\" (UniqueName: \"kubernetes.io/projected/2820ab67-cfb5-4cfb-99f7-275863011207-kube-api-access-8qqdr\") pod \"kube-proxy-r5jrw\" (UID: \"2820ab67-cfb5-4cfb-99f7-275863011207\") " pod="kube-system/kube-proxy-r5jrw" Jul 12 00:07:22.674202 containerd[1906]: time="2025-07-12T00:07:22.673855666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5jrw,Uid:2820ab67-cfb5-4cfb-99f7-275863011207,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:22.685358 kubelet[3316]: I0712 00:07:22.685145 3316 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq24s\" (UniqueName: \"kubernetes.io/projected/0926aa4d-0deb-4cfc-91d4-e598defbb204-kube-api-access-qq24s\") pod \"tigera-operator-5bf8dfcb4-m9lm5\" (UID: \"0926aa4d-0deb-4cfc-91d4-e598defbb204\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-m9lm5" Jul 12 00:07:22.685358 kubelet[3316]: I0712 00:07:22.685222 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0926aa4d-0deb-4cfc-91d4-e598defbb204-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-m9lm5\" (UID: \"0926aa4d-0deb-4cfc-91d4-e598defbb204\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-m9lm5" Jul 12 00:07:22.733527 containerd[1906]: time="2025-07-12T00:07:22.733401317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:22.733791 containerd[1906]: time="2025-07-12T00:07:22.733586477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:22.733791 containerd[1906]: time="2025-07-12T00:07:22.733614957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:22.733791 containerd[1906]: time="2025-07-12T00:07:22.733728757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:22.767836 containerd[1906]: time="2025-07-12T00:07:22.767719661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5jrw,Uid:2820ab67-cfb5-4cfb-99f7-275863011207,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e685c8aca6d18a6b21dc4a22a652e21ed881988c982b434c2d3ee24252557cb\"" Jul 12 00:07:22.771606 containerd[1906]: time="2025-07-12T00:07:22.771536619Z" level=info msg="CreateContainer within sandbox \"6e685c8aca6d18a6b21dc4a22a652e21ed881988c982b434c2d3ee24252557cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:07:22.825377 containerd[1906]: time="2025-07-12T00:07:22.825326673Z" level=info msg="CreateContainer within sandbox \"6e685c8aca6d18a6b21dc4a22a652e21ed881988c982b434c2d3ee24252557cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa55ea098456bf656786c3c0d8d16073b67ad1e4624725b34dd7e3eb96290ca0\"" Jul 12 00:07:22.827754 containerd[1906]: time="2025-07-12T00:07:22.826893633Z" level=info msg="StartContainer for \"aa55ea098456bf656786c3c0d8d16073b67ad1e4624725b34dd7e3eb96290ca0\"" Jul 12 00:07:22.879178 containerd[1906]: time="2025-07-12T00:07:22.879120088Z" level=info msg="StartContainer for \"aa55ea098456bf656786c3c0d8d16073b67ad1e4624725b34dd7e3eb96290ca0\" returns successfully" Jul 12 00:07:22.921785 containerd[1906]: time="2025-07-12T00:07:22.921739908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-m9lm5,Uid:0926aa4d-0deb-4cfc-91d4-e598defbb204,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:07:22.995408 containerd[1906]: time="2025-07-12T00:07:22.993425753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:22.995408 containerd[1906]: time="2025-07-12T00:07:22.993491833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:22.995408 containerd[1906]: time="2025-07-12T00:07:22.993502753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:22.995408 containerd[1906]: time="2025-07-12T00:07:22.993618433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:23.047878 containerd[1906]: time="2025-07-12T00:07:23.047839207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-m9lm5,Uid:0926aa4d-0deb-4cfc-91d4-e598defbb204,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c947df0ccfdc54dc2d954d6312e0629594fdab324165dacbf51158a89b0e7d26\"" Jul 12 00:07:23.050498 containerd[1906]: time="2025-07-12T00:07:23.050334326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:07:23.463230 kubelet[3316]: I0712 00:07:23.462980 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5jrw" podStartSLOduration=1.4629609700000001 podStartE2EDuration="1.46296097s" podCreationTimestamp="2025-07-12 00:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:23.46278041 +0000 UTC m=+8.204911516" watchObservedRunningTime="2025-07-12 00:07:23.46296097 +0000 UTC m=+8.205092036" Jul 12 00:07:25.595767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711643472.mount: Deactivated successfully. 
Jul 12 00:07:28.052282 containerd[1906]: time="2025-07-12T00:07:28.052225627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.063128 containerd[1906]: time="2025-07-12T00:07:28.063025701Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:07:28.071955 containerd[1906]: time="2025-07-12T00:07:28.071882656Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.080126 containerd[1906]: time="2025-07-12T00:07:28.080024251Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:28.081336 containerd[1906]: time="2025-07-12T00:07:28.080705250Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 5.030333084s" Jul 12 00:07:28.081336 containerd[1906]: time="2025-07-12T00:07:28.080742210Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:07:28.082955 containerd[1906]: time="2025-07-12T00:07:28.082749529Z" level=info msg="CreateContainer within sandbox \"c947df0ccfdc54dc2d954d6312e0629594fdab324165dacbf51158a89b0e7d26\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:07:28.140824 containerd[1906]: time="2025-07-12T00:07:28.140753535Z" level=info msg="CreateContainer within sandbox 
\"c947df0ccfdc54dc2d954d6312e0629594fdab324165dacbf51158a89b0e7d26\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"91f637eceabc59d45943982adca9f5f582787282c23d0aec48ce3b8baa61523b\"" Jul 12 00:07:28.142377 containerd[1906]: time="2025-07-12T00:07:28.141730534Z" level=info msg="StartContainer for \"91f637eceabc59d45943982adca9f5f582787282c23d0aec48ce3b8baa61523b\"" Jul 12 00:07:28.192161 containerd[1906]: time="2025-07-12T00:07:28.192029264Z" level=info msg="StartContainer for \"91f637eceabc59d45943982adca9f5f582787282c23d0aec48ce3b8baa61523b\" returns successfully" Jul 12 00:07:34.448191 sudo[2374]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:34.522963 sshd[2370]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:34.531593 systemd[1]: sshd@6-10.200.20.44:22-10.200.16.10:42786.service: Deactivated successfully. Jul 12 00:07:34.531771 systemd-logind[1763]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:07:34.538718 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:07:34.541572 systemd-logind[1763]: Removed session 9. 
Jul 12 00:07:40.058995 kubelet[3316]: I0712 00:07:40.058674 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-m9lm5" podStartSLOduration=13.026492724 podStartE2EDuration="18.058654367s" podCreationTimestamp="2025-07-12 00:07:22 +0000 UTC" firstStartedPulling="2025-07-12 00:07:23.049318007 +0000 UTC m=+7.791449113" lastFinishedPulling="2025-07-12 00:07:28.08147969 +0000 UTC m=+12.823610756" observedRunningTime="2025-07-12 00:07:28.464039783 +0000 UTC m=+13.206170889" watchObservedRunningTime="2025-07-12 00:07:40.058654367 +0000 UTC m=+24.800785473" Jul 12 00:07:40.089466 kubelet[3316]: I0712 00:07:40.089218 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c9b285-cd89-4bb5-8948-9036d99b5ea2-tigera-ca-bundle\") pod \"calico-typha-789745449c-cz2md\" (UID: \"d1c9b285-cd89-4bb5-8948-9036d99b5ea2\") " pod="calico-system/calico-typha-789745449c-cz2md" Jul 12 00:07:40.089466 kubelet[3316]: I0712 00:07:40.089288 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d1c9b285-cd89-4bb5-8948-9036d99b5ea2-typha-certs\") pod \"calico-typha-789745449c-cz2md\" (UID: \"d1c9b285-cd89-4bb5-8948-9036d99b5ea2\") " pod="calico-system/calico-typha-789745449c-cz2md" Jul 12 00:07:40.089466 kubelet[3316]: I0712 00:07:40.089310 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcff\" (UniqueName: \"kubernetes.io/projected/d1c9b285-cd89-4bb5-8948-9036d99b5ea2-kube-api-access-dmcff\") pod \"calico-typha-789745449c-cz2md\" (UID: \"d1c9b285-cd89-4bb5-8948-9036d99b5ea2\") " pod="calico-system/calico-typha-789745449c-cz2md" Jul 12 00:07:40.290565 kubelet[3316]: I0712 00:07:40.290024 3316 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-var-lib-calico\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290565 kubelet[3316]: I0712 00:07:40.290063 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-cni-net-dir\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290565 kubelet[3316]: I0712 00:07:40.290081 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-flexvol-driver-host\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290565 kubelet[3316]: I0712 00:07:40.290119 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2ec914e5-336b-452a-b109-69069525ade8-node-certs\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290565 kubelet[3316]: I0712 00:07:40.290140 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-cni-bin-dir\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290889 kubelet[3316]: I0712 00:07:40.290157 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-policysync\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290889 kubelet[3316]: I0712 00:07:40.290172 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-cni-log-dir\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290889 kubelet[3316]: I0712 00:07:40.290187 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nldn\" (UniqueName: \"kubernetes.io/projected/2ec914e5-336b-452a-b109-69069525ade8-kube-api-access-9nldn\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290889 kubelet[3316]: I0712 00:07:40.290204 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-var-run-calico\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.290889 kubelet[3316]: I0712 00:07:40.290217 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-xtables-lock\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.291061 kubelet[3316]: I0712 00:07:40.290233 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2ec914e5-336b-452a-b109-69069525ade8-tigera-ca-bundle\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.291061 kubelet[3316]: I0712 00:07:40.290250 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ec914e5-336b-452a-b109-69069525ade8-lib-modules\") pod \"calico-node-2d49c\" (UID: \"2ec914e5-336b-452a-b109-69069525ade8\") " pod="calico-system/calico-node-2d49c" Jul 12 00:07:40.340985 kubelet[3316]: E0712 00:07:40.340429 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:40.381644 containerd[1906]: time="2025-07-12T00:07:40.381550938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789745449c-cz2md,Uid:d1c9b285-cd89-4bb5-8948-9036d99b5ea2,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:40.391158 kubelet[3316]: I0712 00:07:40.390976 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b68f90c7-c121-47a4-9328-c85559bf7c5c-kubelet-dir\") pod \"csi-node-driver-7wgr4\" (UID: \"b68f90c7-c121-47a4-9328-c85559bf7c5c\") " pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:40.391320 kubelet[3316]: I0712 00:07:40.391296 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b68f90c7-c121-47a4-9328-c85559bf7c5c-socket-dir\") pod \"csi-node-driver-7wgr4\" (UID: \"b68f90c7-c121-47a4-9328-c85559bf7c5c\") " pod="calico-system/csi-node-driver-7wgr4" Jul 12 
00:07:40.391354 kubelet[3316]: I0712 00:07:40.391319 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ktzc\" (UniqueName: \"kubernetes.io/projected/b68f90c7-c121-47a4-9328-c85559bf7c5c-kube-api-access-7ktzc\") pod \"csi-node-driver-7wgr4\" (UID: \"b68f90c7-c121-47a4-9328-c85559bf7c5c\") " pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:40.391884 kubelet[3316]: I0712 00:07:40.391569 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b68f90c7-c121-47a4-9328-c85559bf7c5c-varrun\") pod \"csi-node-driver-7wgr4\" (UID: \"b68f90c7-c121-47a4-9328-c85559bf7c5c\") " pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:40.392165 kubelet[3316]: I0712 00:07:40.392131 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b68f90c7-c121-47a4-9328-c85559bf7c5c-registration-dir\") pod \"csi-node-driver-7wgr4\" (UID: \"b68f90c7-c121-47a4-9328-c85559bf7c5c\") " pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:40.401303 kubelet[3316]: E0712 00:07:40.401199 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.401303 kubelet[3316]: W0712 00:07:40.401226 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.401303 kubelet[3316]: E0712 00:07:40.401257 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.406773 kubelet[3316]: E0712 00:07:40.406695 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.406773 kubelet[3316]: W0712 00:07:40.406716 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.406773 kubelet[3316]: E0712 00:07:40.406735 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.452686 kubelet[3316]: E0712 00:07:40.452640 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.452686 kubelet[3316]: W0712 00:07:40.452665 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.452686 kubelet[3316]: E0712 00:07:40.452685 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.469795 containerd[1906]: time="2025-07-12T00:07:40.469420526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:40.469795 containerd[1906]: time="2025-07-12T00:07:40.469495206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:40.469795 containerd[1906]: time="2025-07-12T00:07:40.469612326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:40.470852 containerd[1906]: time="2025-07-12T00:07:40.470695405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:40.496179 kubelet[3316]: E0712 00:07:40.495993 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.496179 kubelet[3316]: W0712 00:07:40.496028 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.496989 kubelet[3316]: E0712 00:07:40.496095 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.497161 kubelet[3316]: E0712 00:07:40.497136 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.497255 kubelet[3316]: W0712 00:07:40.497243 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.497403 kubelet[3316]: E0712 00:07:40.497328 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.497862 kubelet[3316]: E0712 00:07:40.497847 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.498398 kubelet[3316]: W0712 00:07:40.498273 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.498398 kubelet[3316]: E0712 00:07:40.498302 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.499032 kubelet[3316]: E0712 00:07:40.498950 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.499032 kubelet[3316]: W0712 00:07:40.498965 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.499032 kubelet[3316]: E0712 00:07:40.499011 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.499376 kubelet[3316]: E0712 00:07:40.499313 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.499518 kubelet[3316]: W0712 00:07:40.499326 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.499518 kubelet[3316]: E0712 00:07:40.499470 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.500683 kubelet[3316]: E0712 00:07:40.500140 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.501028 kubelet[3316]: W0712 00:07:40.500784 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.501028 kubelet[3316]: E0712 00:07:40.500835 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.504161 kubelet[3316]: E0712 00:07:40.502496 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.504161 kubelet[3316]: W0712 00:07:40.502512 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.504161 kubelet[3316]: E0712 00:07:40.502847 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.504759 kubelet[3316]: E0712 00:07:40.504504 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.504759 kubelet[3316]: W0712 00:07:40.504518 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.505038 kubelet[3316]: E0712 00:07:40.505003 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.505239 kubelet[3316]: E0712 00:07:40.505205 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.505239 kubelet[3316]: W0712 00:07:40.505216 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.505598 kubelet[3316]: E0712 00:07:40.505362 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.506643 kubelet[3316]: E0712 00:07:40.506620 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.507838 kubelet[3316]: W0712 00:07:40.507071 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.507838 kubelet[3316]: E0712 00:07:40.507732 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.508067 kubelet[3316]: E0712 00:07:40.508040 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.508226 kubelet[3316]: W0712 00:07:40.508122 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.508226 kubelet[3316]: E0712 00:07:40.508162 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.508526 kubelet[3316]: E0712 00:07:40.508485 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.508526 kubelet[3316]: W0712 00:07:40.508498 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.508705 kubelet[3316]: E0712 00:07:40.508630 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.511135 kubelet[3316]: E0712 00:07:40.509262 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.511135 kubelet[3316]: W0712 00:07:40.509280 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.511135 kubelet[3316]: E0712 00:07:40.509312 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.511441 kubelet[3316]: E0712 00:07:40.511337 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.511441 kubelet[3316]: W0712 00:07:40.511366 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.511441 kubelet[3316]: E0712 00:07:40.511407 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.513627 kubelet[3316]: E0712 00:07:40.513512 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.513627 kubelet[3316]: W0712 00:07:40.513531 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.513627 kubelet[3316]: E0712 00:07:40.513572 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.514813 kubelet[3316]: E0712 00:07:40.514796 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.515526 kubelet[3316]: W0712 00:07:40.514915 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.515526 kubelet[3316]: E0712 00:07:40.515493 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.516307 kubelet[3316]: E0712 00:07:40.515955 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.516596 kubelet[3316]: W0712 00:07:40.516495 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.516596 kubelet[3316]: E0712 00:07:40.516577 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.516888 kubelet[3316]: E0712 00:07:40.516800 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.516888 kubelet[3316]: W0712 00:07:40.516812 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.517050 kubelet[3316]: E0712 00:07:40.517024 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.517240 kubelet[3316]: E0712 00:07:40.517141 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.517240 kubelet[3316]: W0712 00:07:40.517163 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.517240 kubelet[3316]: E0712 00:07:40.517193 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.517862 kubelet[3316]: E0712 00:07:40.517480 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.517862 kubelet[3316]: W0712 00:07:40.517504 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.517862 kubelet[3316]: E0712 00:07:40.517578 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.518334 kubelet[3316]: E0712 00:07:40.518211 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.518334 kubelet[3316]: W0712 00:07:40.518230 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.518334 kubelet[3316]: E0712 00:07:40.518264 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.518662 kubelet[3316]: E0712 00:07:40.518567 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.518662 kubelet[3316]: W0712 00:07:40.518580 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.518662 kubelet[3316]: E0712 00:07:40.518613 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.519018 kubelet[3316]: E0712 00:07:40.518906 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.519018 kubelet[3316]: W0712 00:07:40.518917 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.519292 kubelet[3316]: E0712 00:07:40.519229 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.520331 kubelet[3316]: E0712 00:07:40.519920 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.520331 kubelet[3316]: W0712 00:07:40.519936 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.520331 kubelet[3316]: E0712 00:07:40.519956 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.522528 kubelet[3316]: E0712 00:07:40.522503 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.522528 kubelet[3316]: W0712 00:07:40.522523 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.522621 kubelet[3316]: E0712 00:07:40.522539 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:40.544213 containerd[1906]: time="2025-07-12T00:07:40.543475002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2d49c,Uid:2ec914e5-336b-452a-b109-69069525ade8,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:40.552758 kubelet[3316]: E0712 00:07:40.552685 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:40.553621 kubelet[3316]: W0712 00:07:40.553257 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:40.553621 kubelet[3316]: E0712 00:07:40.553289 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:40.582349 containerd[1906]: time="2025-07-12T00:07:40.582300020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789745449c-cz2md,Uid:d1c9b285-cd89-4bb5-8948-9036d99b5ea2,Namespace:calico-system,Attempt:0,} returns sandbox id \"90d6e781b5eb03073983a319d98c9570850e2fb5bf60acf16b9313f8c702a7aa\"" Jul 12 00:07:40.586603 containerd[1906]: time="2025-07-12T00:07:40.586044617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:07:40.611497 containerd[1906]: time="2025-07-12T00:07:40.610531723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:40.611497 containerd[1906]: time="2025-07-12T00:07:40.610592443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:40.611497 containerd[1906]: time="2025-07-12T00:07:40.610608123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:40.611497 containerd[1906]: time="2025-07-12T00:07:40.610703163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:40.654288 containerd[1906]: time="2025-07-12T00:07:40.654171577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2d49c,Uid:2ec914e5-336b-452a-b109-69069525ade8,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\"" Jul 12 00:07:42.003579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900591055.mount: Deactivated successfully. 
Jul 12 00:07:42.341227 kubelet[3316]: E0712 00:07:42.341115 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:43.112639 containerd[1906]: time="2025-07-12T00:07:43.112586872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:43.115862 containerd[1906]: time="2025-07-12T00:07:43.115826030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 12 00:07:43.123381 containerd[1906]: time="2025-07-12T00:07:43.123295186Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:43.135130 containerd[1906]: time="2025-07-12T00:07:43.135065300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:43.136426 containerd[1906]: time="2025-07-12T00:07:43.136027500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.549149483s" Jul 12 00:07:43.136426 containerd[1906]: time="2025-07-12T00:07:43.136060660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:07:43.137273 containerd[1906]: time="2025-07-12T00:07:43.137253379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:07:43.149409 containerd[1906]: time="2025-07-12T00:07:43.149368013Z" level=info msg="CreateContainer within sandbox \"90d6e781b5eb03073983a319d98c9570850e2fb5bf60acf16b9313f8c702a7aa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:07:43.213647 containerd[1906]: time="2025-07-12T00:07:43.213583141Z" level=info msg="CreateContainer within sandbox \"90d6e781b5eb03073983a319d98c9570850e2fb5bf60acf16b9313f8c702a7aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1721ce065682f6eaf829e74bb027bc2d83277b8be0c94d2d657ba683d0cd5492\"" Jul 12 00:07:43.214924 containerd[1906]: time="2025-07-12T00:07:43.214739180Z" level=info msg="StartContainer for \"1721ce065682f6eaf829e74bb027bc2d83277b8be0c94d2d657ba683d0cd5492\"" Jul 12 00:07:43.273583 containerd[1906]: time="2025-07-12T00:07:43.273471831Z" level=info msg="StartContainer for \"1721ce065682f6eaf829e74bb027bc2d83277b8be0c94d2d657ba683d0cd5492\" returns successfully" Jul 12 00:07:43.510971 kubelet[3316]: E0712 00:07:43.510850 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.511692 kubelet[3316]: W0712 00:07:43.511208 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.511692 kubelet[3316]: E0712 00:07:43.511238 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.512641 kubelet[3316]: E0712 00:07:43.512157 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.512641 kubelet[3316]: W0712 00:07:43.512173 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.512641 kubelet[3316]: E0712 00:07:43.512186 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.512641 kubelet[3316]: E0712 00:07:43.512539 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.512641 kubelet[3316]: W0712 00:07:43.512550 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.512641 kubelet[3316]: E0712 00:07:43.512562 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.513081 kubelet[3316]: E0712 00:07:43.513005 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.513081 kubelet[3316]: W0712 00:07:43.513015 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.513081 kubelet[3316]: E0712 00:07:43.513026 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.513534 kubelet[3316]: E0712 00:07:43.513437 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.513534 kubelet[3316]: W0712 00:07:43.513451 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.513534 kubelet[3316]: E0712 00:07:43.513462 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.513861 kubelet[3316]: E0712 00:07:43.513776 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.513861 kubelet[3316]: W0712 00:07:43.513788 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.513861 kubelet[3316]: E0712 00:07:43.513799 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.514167 kubelet[3316]: E0712 00:07:43.514156 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.514280 kubelet[3316]: W0712 00:07:43.514232 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.514280 kubelet[3316]: E0712 00:07:43.514247 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.514605 kubelet[3316]: E0712 00:07:43.514506 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.514605 kubelet[3316]: W0712 00:07:43.514530 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.514605 kubelet[3316]: E0712 00:07:43.514541 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.514932 kubelet[3316]: E0712 00:07:43.514908 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.515082 kubelet[3316]: W0712 00:07:43.514987 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.515082 kubelet[3316]: E0712 00:07:43.515002 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.515312 kubelet[3316]: E0712 00:07:43.515302 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.515401 kubelet[3316]: W0712 00:07:43.515355 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.515401 kubelet[3316]: E0712 00:07:43.515368 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.515684 kubelet[3316]: E0712 00:07:43.515602 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.515684 kubelet[3316]: W0712 00:07:43.515613 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.515684 kubelet[3316]: E0712 00:07:43.515622 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.516001 kubelet[3316]: E0712 00:07:43.515968 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.516001 kubelet[3316]: W0712 00:07:43.515979 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.516204 kubelet[3316]: E0712 00:07:43.516080 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.516447 kubelet[3316]: E0712 00:07:43.516363 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.516447 kubelet[3316]: W0712 00:07:43.516374 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.516447 kubelet[3316]: E0712 00:07:43.516384 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.516740 kubelet[3316]: E0712 00:07:43.516663 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.516740 kubelet[3316]: W0712 00:07:43.516674 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.516740 kubelet[3316]: E0712 00:07:43.516684 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.516994 kubelet[3316]: E0712 00:07:43.516923 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.516994 kubelet[3316]: W0712 00:07:43.516933 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.516994 kubelet[3316]: E0712 00:07:43.516942 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.522653 kubelet[3316]: E0712 00:07:43.522630 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.522653 kubelet[3316]: W0712 00:07:43.522649 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.522837 kubelet[3316]: E0712 00:07:43.522664 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.522957 kubelet[3316]: E0712 00:07:43.522941 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.522957 kubelet[3316]: W0712 00:07:43.522955 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.523051 kubelet[3316]: E0712 00:07:43.522972 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.523255 kubelet[3316]: E0712 00:07:43.523239 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.523255 kubelet[3316]: W0712 00:07:43.523253 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.523353 kubelet[3316]: E0712 00:07:43.523271 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.523495 kubelet[3316]: E0712 00:07:43.523478 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.523495 kubelet[3316]: W0712 00:07:43.523493 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.523572 kubelet[3316]: E0712 00:07:43.523508 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.523751 kubelet[3316]: E0712 00:07:43.523736 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.523751 kubelet[3316]: W0712 00:07:43.523749 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.523854 kubelet[3316]: E0712 00:07:43.523764 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.523936 kubelet[3316]: E0712 00:07:43.523919 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.523936 kubelet[3316]: W0712 00:07:43.523932 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.523994 kubelet[3316]: E0712 00:07:43.523947 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.524184 kubelet[3316]: E0712 00:07:43.524168 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.524184 kubelet[3316]: W0712 00:07:43.524182 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.525136 kubelet[3316]: E0712 00:07:43.524953 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.525136 kubelet[3316]: W0712 00:07:43.524965 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.525136 kubelet[3316]: E0712 00:07:43.525065 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.526081 kubelet[3316]: E0712 00:07:43.525418 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.526081 kubelet[3316]: E0712 00:07:43.525578 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.526081 kubelet[3316]: W0712 00:07:43.525590 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.526081 kubelet[3316]: E0712 00:07:43.525607 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.526851 kubelet[3316]: E0712 00:07:43.526814 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.527636 kubelet[3316]: W0712 00:07:43.527021 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.527636 kubelet[3316]: E0712 00:07:43.527072 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.528037 kubelet[3316]: E0712 00:07:43.527953 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.528037 kubelet[3316]: W0712 00:07:43.527968 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.528037 kubelet[3316]: E0712 00:07:43.528002 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.529383 kubelet[3316]: E0712 00:07:43.529237 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.529383 kubelet[3316]: W0712 00:07:43.529254 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.530297 kubelet[3316]: E0712 00:07:43.530278 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.530473 kubelet[3316]: W0712 00:07:43.530459 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.530734 kubelet[3316]: E0712 00:07:43.530533 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.532846 kubelet[3316]: E0712 00:07:43.532564 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.532846 kubelet[3316]: W0712 00:07:43.532582 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.532846 kubelet[3316]: E0712 00:07:43.532599 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.533827 kubelet[3316]: E0712 00:07:43.533415 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.534170 kubelet[3316]: E0712 00:07:43.534148 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.534170 kubelet[3316]: W0712 00:07:43.534166 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.534170 kubelet[3316]: E0712 00:07:43.534185 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.534829 kubelet[3316]: E0712 00:07:43.534803 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.534829 kubelet[3316]: W0712 00:07:43.534820 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.535517 kubelet[3316]: E0712 00:07:43.534973 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:43.535848 kubelet[3316]: E0712 00:07:43.535726 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.535848 kubelet[3316]: W0712 00:07:43.535750 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.535848 kubelet[3316]: E0712 00:07:43.535770 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:43.535963 kubelet[3316]: E0712 00:07:43.535926 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:43.535963 kubelet[3316]: W0712 00:07:43.535935 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:43.535963 kubelet[3316]: E0712 00:07:43.535944 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.341367 kubelet[3316]: E0712 00:07:44.341308 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:44.499806 kubelet[3316]: I0712 00:07:44.499770 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:07:44.523745 kubelet[3316]: E0712 00:07:44.523711 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.523745 kubelet[3316]: W0712 00:07:44.523735 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.523753 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.523910 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524336 kubelet[3316]: W0712 00:07:44.523919 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.523928 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.524053 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524336 kubelet[3316]: W0712 00:07:44.524060 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.524068 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.524223 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524336 kubelet[3316]: W0712 00:07:44.524231 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524336 kubelet[3316]: E0712 00:07:44.524261 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524415 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524747 kubelet[3316]: W0712 00:07:44.524422 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524431 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524582 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524747 kubelet[3316]: W0712 00:07:44.524589 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524597 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524721 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.524747 kubelet[3316]: W0712 00:07:44.524729 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.524747 kubelet[3316]: E0712 00:07:44.524736 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.525061 kubelet[3316]: E0712 00:07:44.524856 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525061 kubelet[3316]: W0712 00:07:44.524862 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525061 kubelet[3316]: E0712 00:07:44.524869 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.525061 kubelet[3316]: E0712 00:07:44.524998 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525061 kubelet[3316]: W0712 00:07:44.525004 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525061 kubelet[3316]: E0712 00:07:44.525012 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.525288 kubelet[3316]: E0712 00:07:44.525150 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525288 kubelet[3316]: W0712 00:07:44.525157 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525288 kubelet[3316]: E0712 00:07:44.525166 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.525288 kubelet[3316]: E0712 00:07:44.525286 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525406 kubelet[3316]: W0712 00:07:44.525292 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525406 kubelet[3316]: E0712 00:07:44.525299 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.525483 kubelet[3316]: E0712 00:07:44.525418 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525483 kubelet[3316]: W0712 00:07:44.525424 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525483 kubelet[3316]: E0712 00:07:44.525431 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.525613 kubelet[3316]: E0712 00:07:44.525546 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525613 kubelet[3316]: W0712 00:07:44.525552 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525613 kubelet[3316]: E0712 00:07:44.525560 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.525709 kubelet[3316]: E0712 00:07:44.525679 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525709 kubelet[3316]: W0712 00:07:44.525685 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525709 kubelet[3316]: E0712 00:07:44.525692 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.525813 kubelet[3316]: E0712 00:07:44.525807 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.525847 kubelet[3316]: W0712 00:07:44.525814 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.525847 kubelet[3316]: E0712 00:07:44.525821 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.529248 kubelet[3316]: E0712 00:07:44.529131 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.529248 kubelet[3316]: W0712 00:07:44.529146 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.529248 kubelet[3316]: E0712 00:07:44.529158 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:07:44.529442 kubelet[3316]: E0712 00:07:44.529420 3316 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:07:44.529483 kubelet[3316]: W0712 00:07:44.529447 3316 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:07:44.529483 kubelet[3316]: E0712 00:07:44.529466 3316 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:07:44.864494 containerd[1906]: time="2025-07-12T00:07:44.864444072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:44.869457 containerd[1906]: time="2025-07-12T00:07:44.869301110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 12 00:07:44.873013 containerd[1906]: time="2025-07-12T00:07:44.872959228Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:44.879241 containerd[1906]: time="2025-07-12T00:07:44.879157865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:44.879901 containerd[1906]: time="2025-07-12T00:07:44.879771545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.742402726s" Jul 12 00:07:44.879901 containerd[1906]: time="2025-07-12T00:07:44.879806185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:07:44.882288 containerd[1906]: time="2025-07-12T00:07:44.882252303Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:07:44.938515 containerd[1906]: time="2025-07-12T00:07:44.938469355Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca\"" Jul 12 00:07:44.939246 containerd[1906]: time="2025-07-12T00:07:44.939213195Z" level=info msg="StartContainer for \"10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca\"" Jul 12 00:07:44.990414 containerd[1906]: time="2025-07-12T00:07:44.990299489Z" level=info msg="StartContainer for \"10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca\" returns successfully" Jul 12 00:07:45.142523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca-rootfs.mount: Deactivated successfully. Jul 12 00:07:45.520766 kubelet[3316]: I0712 00:07:45.520695 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-789745449c-cz2md" podStartSLOduration=2.969398142 podStartE2EDuration="5.520679223s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:07:40.585699418 +0000 UTC m=+25.327830524" lastFinishedPulling="2025-07-12 00:07:43.136980499 +0000 UTC m=+27.879111605" observedRunningTime="2025-07-12 00:07:43.53339794 +0000 UTC m=+28.275529046" watchObservedRunningTime="2025-07-12 00:07:45.520679223 +0000 UTC m=+30.262810329" Jul 12 00:07:46.055856 containerd[1906]: time="2025-07-12T00:07:45.636409565Z" level=error msg="collecting metrics for 10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca" error="cgroups: cgroup deleted: unknown" Jul 12 00:07:46.075830 containerd[1906]: time="2025-07-12T00:07:46.075770784Z" level=info msg="shim disconnected" 
id=10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca namespace=k8s.io Jul 12 00:07:46.075830 containerd[1906]: time="2025-07-12T00:07:46.075825544Z" level=warning msg="cleaning up after shim disconnected" id=10615de6d5157559cb11eb5a359cc2de09c41a947575234e33a0c633ff57ccca namespace=k8s.io Jul 12 00:07:46.075830 containerd[1906]: time="2025-07-12T00:07:46.075836304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:07:46.342061 kubelet[3316]: E0712 00:07:46.341749 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:46.508674 containerd[1906]: time="2025-07-12T00:07:46.508382687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 00:07:48.341546 kubelet[3316]: E0712 00:07:48.341494 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:49.878653 containerd[1906]: time="2025-07-12T00:07:49.877923956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:49.881235 containerd[1906]: time="2025-07-12T00:07:49.881191115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:07:49.886168 containerd[1906]: time="2025-07-12T00:07:49.886121472Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 12 00:07:49.891810 containerd[1906]: time="2025-07-12T00:07:49.891765869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:49.892563 containerd[1906]: time="2025-07-12T00:07:49.892452669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.384030182s" Jul 12 00:07:49.892563 containerd[1906]: time="2025-07-12T00:07:49.892481349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:07:49.895204 containerd[1906]: time="2025-07-12T00:07:49.895068068Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:07:49.945350 containerd[1906]: time="2025-07-12T00:07:49.945304082Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167\"" Jul 12 00:07:49.946126 containerd[1906]: time="2025-07-12T00:07:49.945876122Z" level=info msg="StartContainer for \"e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167\"" Jul 12 00:07:50.000259 containerd[1906]: time="2025-07-12T00:07:50.000131655Z" level=info msg="StartContainer for \"e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167\" returns successfully" Jul 12 00:07:50.341986 
kubelet[3316]: E0712 00:07:50.341658 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:51.185616 containerd[1906]: time="2025-07-12T00:07:51.185567059Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:07:51.206186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167-rootfs.mount: Deactivated successfully. Jul 12 00:07:51.216001 kubelet[3316]: I0712 00:07:51.215969 3316 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:07:51.275404 kubelet[3316]: I0712 00:07:51.275361 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-backend-key-pair\") pod \"whisker-6488b96975-7fwjv\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " pod="calico-system/whisker-6488b96975-7fwjv" Jul 12 00:07:51.275404 kubelet[3316]: I0712 00:07:51.275401 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8289963-d035-4bbc-9efb-a53e9428a42b-config\") pod \"goldmane-58fd7646b9-jddn9\" (UID: \"a8289963-d035-4bbc-9efb-a53e9428a42b\") " pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:51.275762 kubelet[3316]: I0712 00:07:51.275429 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8289963-d035-4bbc-9efb-a53e9428a42b-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-jddn9\" (UID: \"a8289963-d035-4bbc-9efb-a53e9428a42b\") " pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:51.275762 kubelet[3316]: I0712 00:07:51.275446 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh686\" (UniqueName: \"kubernetes.io/projected/a8289963-d035-4bbc-9efb-a53e9428a42b-kube-api-access-zh686\") pod \"goldmane-58fd7646b9-jddn9\" (UID: \"a8289963-d035-4bbc-9efb-a53e9428a42b\") " pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:51.275762 kubelet[3316]: I0712 00:07:51.275479 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d03f5dbf-aed9-446d-a8ba-7e995cb04351-tigera-ca-bundle\") pod \"calico-kube-controllers-5649f74b8d-dpdd7\" (UID: \"d03f5dbf-aed9-446d-a8ba-7e995cb04351\") " pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" Jul 12 00:07:51.275762 kubelet[3316]: I0712 00:07:51.275504 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947bj\" (UniqueName: \"kubernetes.io/projected/d03f5dbf-aed9-446d-a8ba-7e995cb04351-kube-api-access-947bj\") pod \"calico-kube-controllers-5649f74b8d-dpdd7\" (UID: \"d03f5dbf-aed9-446d-a8ba-7e995cb04351\") " pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" Jul 12 00:07:51.275762 kubelet[3316]: I0712 00:07:51.275523 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a8289963-d035-4bbc-9efb-a53e9428a42b-goldmane-key-pair\") pod \"goldmane-58fd7646b9-jddn9\" (UID: \"a8289963-d035-4bbc-9efb-a53e9428a42b\") " pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:51.276014 kubelet[3316]: 
I0712 00:07:51.275537 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98b2\" (UniqueName: \"kubernetes.io/projected/90652f12-5f24-4f92-ba31-8e8fc442c377-kube-api-access-f98b2\") pod \"calico-apiserver-b47f68fd8-ttd9w\" (UID: \"90652f12-5f24-4f92-ba31-8e8fc442c377\") " pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" Jul 12 00:07:51.276014 kubelet[3316]: I0712 00:07:51.275557 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-ca-bundle\") pod \"whisker-6488b96975-7fwjv\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " pod="calico-system/whisker-6488b96975-7fwjv" Jul 12 00:07:51.276233 kubelet[3316]: I0712 00:07:51.276151 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnf9\" (UniqueName: \"kubernetes.io/projected/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-kube-api-access-6cnf9\") pod \"whisker-6488b96975-7fwjv\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " pod="calico-system/whisker-6488b96975-7fwjv" Jul 12 00:07:51.276233 kubelet[3316]: I0712 00:07:51.276194 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcspw\" (UniqueName: \"kubernetes.io/projected/31fec83d-971d-44e2-913b-a79f6e564b60-kube-api-access-jcspw\") pod \"calico-apiserver-b47f68fd8-s5g4q\" (UID: \"31fec83d-971d-44e2-913b-a79f6e564b60\") " pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" Jul 12 00:07:51.276233 kubelet[3316]: I0712 00:07:51.276211 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxn7\" (UniqueName: \"kubernetes.io/projected/63578b12-3c8e-4f1d-8853-0022228cafa4-kube-api-access-8hxn7\") pod \"coredns-7c65d6cfc9-blcvm\" (UID: 
\"63578b12-3c8e-4f1d-8853-0022228cafa4\") " pod="kube-system/coredns-7c65d6cfc9-blcvm" Jul 12 00:07:51.276233 kubelet[3316]: I0712 00:07:51.276226 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcj6k\" (UniqueName: \"kubernetes.io/projected/040330d5-fac5-4f81-94d3-dbbc52011ddb-kube-api-access-rcj6k\") pod \"coredns-7c65d6cfc9-58ltv\" (UID: \"040330d5-fac5-4f81-94d3-dbbc52011ddb\") " pod="kube-system/coredns-7c65d6cfc9-58ltv" Jul 12 00:07:51.276538 kubelet[3316]: I0712 00:07:51.276245 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31fec83d-971d-44e2-913b-a79f6e564b60-calico-apiserver-certs\") pod \"calico-apiserver-b47f68fd8-s5g4q\" (UID: \"31fec83d-971d-44e2-913b-a79f6e564b60\") " pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" Jul 12 00:07:51.276538 kubelet[3316]: I0712 00:07:51.276271 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/040330d5-fac5-4f81-94d3-dbbc52011ddb-config-volume\") pod \"coredns-7c65d6cfc9-58ltv\" (UID: \"040330d5-fac5-4f81-94d3-dbbc52011ddb\") " pod="kube-system/coredns-7c65d6cfc9-58ltv" Jul 12 00:07:51.276538 kubelet[3316]: I0712 00:07:51.276289 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63578b12-3c8e-4f1d-8853-0022228cafa4-config-volume\") pod \"coredns-7c65d6cfc9-blcvm\" (UID: \"63578b12-3c8e-4f1d-8853-0022228cafa4\") " pod="kube-system/coredns-7c65d6cfc9-blcvm" Jul 12 00:07:51.276538 kubelet[3316]: I0712 00:07:51.276308 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/90652f12-5f24-4f92-ba31-8e8fc442c377-calico-apiserver-certs\") pod \"calico-apiserver-b47f68fd8-ttd9w\" (UID: \"90652f12-5f24-4f92-ba31-8e8fc442c377\") " pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" Jul 12 00:07:52.071373 containerd[1906]: time="2025-07-12T00:07:52.071257773Z" level=info msg="shim disconnected" id=e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167 namespace=k8s.io Jul 12 00:07:52.071373 containerd[1906]: time="2025-07-12T00:07:52.071315293Z" level=warning msg="cleaning up after shim disconnected" id=e2ab34990d9dd81af77dc198c127285b350d3d61d6f99a7212dcf799aa8d1167 namespace=k8s.io Jul 12 00:07:52.071373 containerd[1906]: time="2025-07-12T00:07:52.071325573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:07:52.156971 containerd[1906]: time="2025-07-12T00:07:52.156938530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-blcvm,Uid:63578b12-3c8e-4f1d-8853-0022228cafa4,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:52.170914 containerd[1906]: time="2025-07-12T00:07:52.170626603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58ltv,Uid:040330d5-fac5-4f81-94d3-dbbc52011ddb,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:52.179205 containerd[1906]: time="2025-07-12T00:07:52.179113078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5649f74b8d-dpdd7,Uid:d03f5dbf-aed9-446d-a8ba-7e995cb04351,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:52.180972 containerd[1906]: time="2025-07-12T00:07:52.180917237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jddn9,Uid:a8289963-d035-4bbc-9efb-a53e9428a42b,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:52.183342 containerd[1906]: time="2025-07-12T00:07:52.183034796Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6488b96975-7fwjv,Uid:2cd7c4c1-a14d-4383-b3e7-826ad1471bcf,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:52.183342 containerd[1906]: time="2025-07-12T00:07:52.183169116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-s5g4q,Uid:31fec83d-971d-44e2-913b-a79f6e564b60,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:07:52.197972 containerd[1906]: time="2025-07-12T00:07:52.197915269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-ttd9w,Uid:90652f12-5f24-4f92-ba31-8e8fc442c377,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:07:52.344767 containerd[1906]: time="2025-07-12T00:07:52.343945315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wgr4,Uid:b68f90c7-c121-47a4-9328-c85559bf7c5c,Namespace:calico-system,Attempt:0,}" Jul 12 00:07:52.358300 containerd[1906]: time="2025-07-12T00:07:52.358204828Z" level=error msg="Failed to destroy network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.359363 containerd[1906]: time="2025-07-12T00:07:52.359325668Z" level=error msg="encountered an error cleaning up failed sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.359449 containerd[1906]: time="2025-07-12T00:07:52.359383548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-blcvm,Uid:63578b12-3c8e-4f1d-8853-0022228cafa4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.359625 kubelet[3316]: E0712 00:07:52.359593 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.360427 kubelet[3316]: E0712 00:07:52.359984 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-blcvm" Jul 12 00:07:52.360427 kubelet[3316]: E0712 00:07:52.360032 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-blcvm" Jul 12 00:07:52.360427 kubelet[3316]: E0712 00:07:52.360079 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-blcvm_kube-system(63578b12-3c8e-4f1d-8853-0022228cafa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-blcvm_kube-system(63578b12-3c8e-4f1d-8853-0022228cafa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-blcvm" podUID="63578b12-3c8e-4f1d-8853-0022228cafa4" Jul 12 00:07:52.518458 containerd[1906]: time="2025-07-12T00:07:52.518194628Z" level=error msg="Failed to destroy network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.530405 containerd[1906]: time="2025-07-12T00:07:52.528040703Z" level=error msg="encountered an error cleaning up failed sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.530405 containerd[1906]: time="2025-07-12T00:07:52.528392022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58ltv,Uid:040330d5-fac5-4f81-94d3-dbbc52011ddb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.530405 containerd[1906]: time="2025-07-12T00:07:52.529846702Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:07:52.532079 kubelet[3316]: E0712 00:07:52.531905 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.532264 kubelet[3316]: E0712 00:07:52.532236 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58ltv" Jul 12 00:07:52.532405 kubelet[3316]: E0712 00:07:52.532265 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58ltv" Jul 12 00:07:52.534106 kubelet[3316]: E0712 00:07:52.532425 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-58ltv_kube-system(040330d5-fac5-4f81-94d3-dbbc52011ddb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-58ltv_kube-system(040330d5-fac5-4f81-94d3-dbbc52011ddb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58ltv" podUID="040330d5-fac5-4f81-94d3-dbbc52011ddb" Jul 12 00:07:52.534106 kubelet[3316]: I0712 00:07:52.531974 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:07:52.538100 containerd[1906]: time="2025-07-12T00:07:52.537675338Z" level=info msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" Jul 12 00:07:52.539508 containerd[1906]: time="2025-07-12T00:07:52.539248897Z" level=info msg="Ensure that sandbox 1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376 in task-service has been cleanup successfully" Jul 12 00:07:52.563063 containerd[1906]: time="2025-07-12T00:07:52.563017365Z" level=error msg="Failed to destroy network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.564219 containerd[1906]: time="2025-07-12T00:07:52.564186364Z" level=error msg="encountered an error cleaning up failed sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.564384 containerd[1906]: time="2025-07-12T00:07:52.564363044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-s5g4q,Uid:31fec83d-971d-44e2-913b-a79f6e564b60,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.570874 kubelet[3316]: E0712 00:07:52.570752 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.570874 kubelet[3316]: E0712 00:07:52.570818 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" Jul 12 00:07:52.570874 kubelet[3316]: E0712 00:07:52.570838 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" Jul 12 00:07:52.571034 kubelet[3316]: E0712 00:07:52.570873 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b47f68fd8-s5g4q_calico-apiserver(31fec83d-971d-44e2-913b-a79f6e564b60)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b47f68fd8-s5g4q_calico-apiserver(31fec83d-971d-44e2-913b-a79f6e564b60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" podUID="31fec83d-971d-44e2-913b-a79f6e564b60" Jul 12 00:07:52.616319 containerd[1906]: time="2025-07-12T00:07:52.616198098Z" level=error msg="Failed to destroy network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.617969 containerd[1906]: time="2025-07-12T00:07:52.617515218Z" level=error msg="encountered an error cleaning up failed sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.618146 containerd[1906]: time="2025-07-12T00:07:52.618069697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5649f74b8d-dpdd7,Uid:d03f5dbf-aed9-446d-a8ba-7e995cb04351,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 
00:07:52.618853 kubelet[3316]: E0712 00:07:52.618814 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.618926 kubelet[3316]: E0712 00:07:52.618872 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" Jul 12 00:07:52.618926 kubelet[3316]: E0712 00:07:52.618893 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" Jul 12 00:07:52.619021 kubelet[3316]: E0712 00:07:52.618937 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5649f74b8d-dpdd7_calico-system(d03f5dbf-aed9-446d-a8ba-7e995cb04351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5649f74b8d-dpdd7_calico-system(d03f5dbf-aed9-446d-a8ba-7e995cb04351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" podUID="d03f5dbf-aed9-446d-a8ba-7e995cb04351" Jul 12 00:07:52.644115 containerd[1906]: time="2025-07-12T00:07:52.643852364Z" level=error msg="Failed to destroy network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.646048 containerd[1906]: time="2025-07-12T00:07:52.645885323Z" level=error msg="encountered an error cleaning up failed sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.646048 containerd[1906]: time="2025-07-12T00:07:52.645952523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-ttd9w,Uid:90652f12-5f24-4f92-ba31-8e8fc442c377,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.646708 kubelet[3316]: E0712 00:07:52.646377 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.646708 kubelet[3316]: E0712 00:07:52.646448 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" Jul 12 00:07:52.646708 kubelet[3316]: E0712 00:07:52.646467 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" Jul 12 00:07:52.648320 kubelet[3316]: E0712 00:07:52.646591 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b47f68fd8-ttd9w_calico-apiserver(90652f12-5f24-4f92-ba31-8e8fc442c377)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b47f68fd8-ttd9w_calico-apiserver(90652f12-5f24-4f92-ba31-8e8fc442c377)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" podUID="90652f12-5f24-4f92-ba31-8e8fc442c377" Jul 12 00:07:52.657381 containerd[1906]: time="2025-07-12T00:07:52.657242878Z" level=error msg="Failed to destroy network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.657863 containerd[1906]: time="2025-07-12T00:07:52.657712957Z" level=error msg="encountered an error cleaning up failed sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.657863 containerd[1906]: time="2025-07-12T00:07:52.657768637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6488b96975-7fwjv,Uid:2cd7c4c1-a14d-4383-b3e7-826ad1471bcf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.659142 kubelet[3316]: E0712 00:07:52.658105 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.659142 kubelet[3316]: E0712 00:07:52.658164 3316 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6488b96975-7fwjv" Jul 12 00:07:52.659142 kubelet[3316]: E0712 00:07:52.658184 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6488b96975-7fwjv" Jul 12 00:07:52.659295 kubelet[3316]: E0712 00:07:52.658225 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6488b96975-7fwjv_calico-system(2cd7c4c1-a14d-4383-b3e7-826ad1471bcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6488b96975-7fwjv_calico-system(2cd7c4c1-a14d-4383-b3e7-826ad1471bcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6488b96975-7fwjv" podUID="2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" Jul 12 00:07:52.661323 containerd[1906]: time="2025-07-12T00:07:52.661276556Z" level=error msg="Failed to destroy network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.661698 containerd[1906]: time="2025-07-12T00:07:52.661668435Z" level=error msg="encountered an error cleaning up failed sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.661754 containerd[1906]: time="2025-07-12T00:07:52.661717475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wgr4,Uid:b68f90c7-c121-47a4-9328-c85559bf7c5c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.662211 kubelet[3316]: E0712 00:07:52.661960 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.662211 kubelet[3316]: E0712 00:07:52.662020 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:52.662211 kubelet[3316]: E0712 00:07:52.662048 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wgr4" Jul 12 00:07:52.662331 kubelet[3316]: E0712 00:07:52.662083 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wgr4_calico-system(b68f90c7-c121-47a4-9328-c85559bf7c5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wgr4_calico-system(b68f90c7-c121-47a4-9328-c85559bf7c5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:52.662491 containerd[1906]: time="2025-07-12T00:07:52.662463555Z" level=error msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" failed" error="failed to destroy network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.662705 kubelet[3316]: E0712 00:07:52.662668 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:07:52.662757 kubelet[3316]: E0712 00:07:52.662714 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376"} Jul 12 00:07:52.662784 kubelet[3316]: E0712 00:07:52.662765 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63578b12-3c8e-4f1d-8853-0022228cafa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:52.662827 kubelet[3316]: E0712 00:07:52.662786 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63578b12-3c8e-4f1d-8853-0022228cafa4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-blcvm" podUID="63578b12-3c8e-4f1d-8853-0022228cafa4" Jul 12 00:07:52.663625 containerd[1906]: time="2025-07-12T00:07:52.663573954Z" level=error msg="Failed to destroy network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.663851 containerd[1906]: time="2025-07-12T00:07:52.663822994Z" level=error msg="encountered an error cleaning up failed sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.663912 containerd[1906]: time="2025-07-12T00:07:52.663867634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jddn9,Uid:a8289963-d035-4bbc-9efb-a53e9428a42b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.664032 kubelet[3316]: E0712 00:07:52.664002 3316 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:52.664169 kubelet[3316]: E0712 00:07:52.664044 3316 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:52.664169 kubelet[3316]: E0712 00:07:52.664060 3316 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jddn9" Jul 12 00:07:52.664169 kubelet[3316]: E0712 00:07:52.664101 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-jddn9_calico-system(a8289963-d035-4bbc-9efb-a53e9428a42b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-jddn9_calico-system(a8289963-d035-4bbc-9efb-a53e9428a42b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jddn9" podUID="a8289963-d035-4bbc-9efb-a53e9428a42b" Jul 12 00:07:53.206922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b-shm.mount: Deactivated successfully. Jul 12 00:07:53.207126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376-shm.mount: Deactivated successfully. 
Jul 12 00:07:53.536604 kubelet[3316]: I0712 00:07:53.534625 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:07:53.536932 containerd[1906]: time="2025-07-12T00:07:53.535336115Z" level=info msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" Jul 12 00:07:53.536932 containerd[1906]: time="2025-07-12T00:07:53.535507115Z" level=info msg="Ensure that sandbox 225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8 in task-service has been cleanup successfully" Jul 12 00:07:53.538245 kubelet[3316]: I0712 00:07:53.538217 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:07:53.538786 containerd[1906]: time="2025-07-12T00:07:53.538746194Z" level=info msg="StopPodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" Jul 12 00:07:53.539116 containerd[1906]: time="2025-07-12T00:07:53.538898154Z" level=info msg="Ensure that sandbox 3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7 in task-service has been cleanup successfully" Jul 12 00:07:53.557418 kubelet[3316]: I0712 00:07:53.557380 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:07:53.559987 containerd[1906]: time="2025-07-12T00:07:53.559914943Z" level=info msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" Jul 12 00:07:53.560122 containerd[1906]: time="2025-07-12T00:07:53.560084623Z" level=info msg="Ensure that sandbox b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b in task-service has been cleanup successfully" Jul 12 00:07:53.563203 kubelet[3316]: I0712 00:07:53.563052 3316 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:07:53.563726 containerd[1906]: time="2025-07-12T00:07:53.563653941Z" level=info msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" Jul 12 00:07:53.564617 containerd[1906]: time="2025-07-12T00:07:53.564508021Z" level=info msg="Ensure that sandbox d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5 in task-service has been cleanup successfully" Jul 12 00:07:53.568425 kubelet[3316]: I0712 00:07:53.568397 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:53.570137 containerd[1906]: time="2025-07-12T00:07:53.569810698Z" level=info msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" Jul 12 00:07:53.570209 containerd[1906]: time="2025-07-12T00:07:53.570081818Z" level=info msg="Ensure that sandbox f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210 in task-service has been cleanup successfully" Jul 12 00:07:53.571359 kubelet[3316]: I0712 00:07:53.571336 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:07:53.573333 containerd[1906]: time="2025-07-12T00:07:53.573291736Z" level=info msg="StopPodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" Jul 12 00:07:53.575301 containerd[1906]: time="2025-07-12T00:07:53.575264655Z" level=info msg="Ensure that sandbox dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf in task-service has been cleanup successfully" Jul 12 00:07:53.579803 kubelet[3316]: I0712 00:07:53.579773 3316 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:07:53.580889 
containerd[1906]: time="2025-07-12T00:07:53.580557573Z" level=info msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" Jul 12 00:07:53.583599 containerd[1906]: time="2025-07-12T00:07:53.583562691Z" level=info msg="Ensure that sandbox ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce in task-service has been cleanup successfully" Jul 12 00:07:53.628504 containerd[1906]: time="2025-07-12T00:07:53.628355989Z" level=error msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" failed" error="failed to destroy network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.628748 kubelet[3316]: E0712 00:07:53.628708 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:07:53.628797 kubelet[3316]: E0712 00:07:53.628757 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8"} Jul 12 00:07:53.628832 kubelet[3316]: E0712 00:07:53.628794 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8289963-d035-4bbc-9efb-a53e9428a42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.628832 kubelet[3316]: E0712 00:07:53.628817 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8289963-d035-4bbc-9efb-a53e9428a42b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jddn9" podUID="a8289963-d035-4bbc-9efb-a53e9428a42b" Jul 12 00:07:53.642669 containerd[1906]: time="2025-07-12T00:07:53.641977462Z" level=error msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" failed" error="failed to destroy network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.642803 kubelet[3316]: E0712 00:07:53.642248 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:07:53.642803 kubelet[3316]: E0712 00:07:53.642298 3316 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5"} Jul 12 00:07:53.642803 kubelet[3316]: E0712 00:07:53.642336 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31fec83d-971d-44e2-913b-a79f6e564b60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.642803 kubelet[3316]: E0712 00:07:53.642356 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31fec83d-971d-44e2-913b-a79f6e564b60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" podUID="31fec83d-971d-44e2-913b-a79f6e564b60" Jul 12 00:07:53.644239 containerd[1906]: time="2025-07-12T00:07:53.644195181Z" level=error msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" failed" error="failed to destroy network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.644614 kubelet[3316]: E0712 00:07:53.644496 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:07:53.644614 kubelet[3316]: E0712 00:07:53.644540 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce"} Jul 12 00:07:53.644614 kubelet[3316]: E0712 00:07:53.644571 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b68f90c7-c121-47a4-9328-c85559bf7c5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.644614 kubelet[3316]: E0712 00:07:53.644589 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b68f90c7-c121-47a4-9328-c85559bf7c5c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wgr4" podUID="b68f90c7-c121-47a4-9328-c85559bf7c5c" Jul 12 00:07:53.657812 containerd[1906]: time="2025-07-12T00:07:53.657741334Z" level=error msg="StopPodSandbox for 
\"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" failed" error="failed to destroy network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.658122 kubelet[3316]: E0712 00:07:53.657994 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:07:53.658122 kubelet[3316]: E0712 00:07:53.658046 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf"} Jul 12 00:07:53.658239 kubelet[3316]: E0712 00:07:53.658213 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90652f12-5f24-4f92-ba31-8e8fc442c377\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.658299 kubelet[3316]: E0712 00:07:53.658252 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90652f12-5f24-4f92-ba31-8e8fc442c377\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" podUID="90652f12-5f24-4f92-ba31-8e8fc442c377" Jul 12 00:07:53.658907 containerd[1906]: time="2025-07-12T00:07:53.658798933Z" level=error msg="StopPodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" failed" error="failed to destroy network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.659167 kubelet[3316]: E0712 00:07:53.658985 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:07:53.659305 kubelet[3316]: E0712 00:07:53.659282 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7"} Jul 12 00:07:53.659342 kubelet[3316]: E0712 00:07:53.659321 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d03f5dbf-aed9-446d-a8ba-7e995cb04351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.659397 kubelet[3316]: E0712 00:07:53.659353 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d03f5dbf-aed9-446d-a8ba-7e995cb04351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" podUID="d03f5dbf-aed9-446d-a8ba-7e995cb04351" Jul 12 00:07:53.662754 containerd[1906]: time="2025-07-12T00:07:53.662699051Z" level=error msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" failed" error="failed to destroy network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.663540 kubelet[3316]: E0712 00:07:53.663178 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:53.663540 kubelet[3316]: E0712 00:07:53.663216 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210"} Jul 12 00:07:53.663540 kubelet[3316]: E0712 00:07:53.663240 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.663540 kubelet[3316]: E0712 00:07:53.663258 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6488b96975-7fwjv" podUID="2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" Jul 12 00:07:53.663831 containerd[1906]: time="2025-07-12T00:07:53.663800931Z" level=error msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" failed" error="failed to destroy network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:07:53.664028 kubelet[3316]: E0712 00:07:53.663997 3316 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:07:53.664084 kubelet[3316]: E0712 00:07:53.664034 3316 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b"} Jul 12 00:07:53.664084 kubelet[3316]: E0712 00:07:53.664059 3316 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"040330d5-fac5-4f81-94d3-dbbc52011ddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:07:53.664172 kubelet[3316]: E0712 00:07:53.664079 3316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"040330d5-fac5-4f81-94d3-dbbc52011ddb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58ltv" podUID="040330d5-fac5-4f81-94d3-dbbc52011ddb" Jul 12 00:07:58.802353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972832376.mount: Deactivated successfully. 
Jul 12 00:07:58.862492 containerd[1906]: time="2025-07-12T00:07:58.862439235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:58.865722 containerd[1906]: time="2025-07-12T00:07:58.865582434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:07:58.872002 containerd[1906]: time="2025-07-12T00:07:58.871947071Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:58.877991 containerd[1906]: time="2025-07-12T00:07:58.877944788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:58.878697 containerd[1906]: time="2025-07-12T00:07:58.878560147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.348682245s" Jul 12 00:07:58.878697 containerd[1906]: time="2025-07-12T00:07:58.878587507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:07:58.903452 containerd[1906]: time="2025-07-12T00:07:58.903422295Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:07:58.952908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484993683.mount: 
Deactivated successfully. Jul 12 00:07:58.972595 containerd[1906]: time="2025-07-12T00:07:58.972548380Z" level=info msg="CreateContainer within sandbox \"dd5c9eaaf7da8c5e68d78d5d36390155dfe4d74497f21c97719fad66dd3347ee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe6278fef3a1f5af8ff4b3dd72b11e936c2b67a6114bb50a91e6590c5bc3a9ab\"" Jul 12 00:07:58.973403 containerd[1906]: time="2025-07-12T00:07:58.973371620Z" level=info msg="StartContainer for \"fe6278fef3a1f5af8ff4b3dd72b11e936c2b67a6114bb50a91e6590c5bc3a9ab\"" Jul 12 00:07:59.033492 containerd[1906]: time="2025-07-12T00:07:59.033441150Z" level=info msg="StartContainer for \"fe6278fef3a1f5af8ff4b3dd72b11e936c2b67a6114bb50a91e6590c5bc3a9ab\" returns successfully" Jul 12 00:07:59.332311 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:07:59.332457 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 12 00:07:59.691526 kubelet[3316]: I0712 00:07:59.691359 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2d49c" podStartSLOduration=1.469433891 podStartE2EDuration="19.691342543s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:07:40.657602055 +0000 UTC m=+25.399733161" lastFinishedPulling="2025-07-12 00:07:58.879510707 +0000 UTC m=+43.621641813" observedRunningTime="2025-07-12 00:07:59.690813383 +0000 UTC m=+44.432944489" watchObservedRunningTime="2025-07-12 00:07:59.691342543 +0000 UTC m=+44.433473649" Jul 12 00:07:59.765766 containerd[1906]: time="2025-07-12T00:07:59.765358546Z" level=info msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:59.955325 containerd[1906]: 
2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" iface="eth0" netns="/var/run/netns/cni-c167d0d4-0816-10fc-2a80-7d785b815efa" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" iface="eth0" netns="/var/run/netns/cni-c167d0d4-0816-10fc-2a80-7d785b815efa" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" iface="eth0" netns="/var/run/netns/cni-c167d0d4-0816-10fc-2a80-7d785b815efa" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.898 [INFO][4512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.930 [INFO][4520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.931 [INFO][4520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.931 [INFO][4520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.939 [WARNING][4520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.939 [INFO][4520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.945 [INFO][4520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:07:59.955325 containerd[1906]: 2025-07-12 00:07:59.951 [INFO][4512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:07:59.958133 containerd[1906]: time="2025-07-12T00:07:59.955545971Z" level=info msg="TearDown network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" successfully" Jul 12 00:07:59.958133 containerd[1906]: time="2025-07-12T00:07:59.955577691Z" level=info msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" returns successfully" Jul 12 00:07:59.962363 systemd[1]: run-netns-cni\x2dc167d0d4\x2d0816\x2d10fc\x2d2a80\x2d7d785b815efa.mount: Deactivated successfully. 
Jul 12 00:08:00.033791 kubelet[3316]: I0712 00:08:00.033752 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-ca-bundle\") pod \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " Jul 12 00:08:00.034147 kubelet[3316]: I0712 00:08:00.034123 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" (UID: "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:08:00.034209 kubelet[3316]: I0712 00:08:00.034197 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cnf9\" (UniqueName: \"kubernetes.io/projected/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-kube-api-access-6cnf9\") pod \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " Jul 12 00:08:00.034240 kubelet[3316]: I0712 00:08:00.034219 3316 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-backend-key-pair\") pod \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\" (UID: \"2cd7c4c1-a14d-4383-b3e7-826ad1471bcf\") " Jul 12 00:08:00.034295 kubelet[3316]: I0712 00:08:00.034276 3316 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-ca-bundle\") on node \"ci-4081.3.4-n-0fb9ec6aad\" DevicePath \"\"" Jul 12 00:08:00.043350 kubelet[3316]: I0712 00:08:00.043291 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" (UID: "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:08:00.043469 kubelet[3316]: I0712 00:08:00.043419 3316 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-kube-api-access-6cnf9" (OuterVolumeSpecName: "kube-api-access-6cnf9") pod "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" (UID: "2cd7c4c1-a14d-4383-b3e7-826ad1471bcf"). InnerVolumeSpecName "kube-api-access-6cnf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:08:00.044023 systemd[1]: var-lib-kubelet-pods-2cd7c4c1\x2da14d\x2d4383\x2db3e7\x2d826ad1471bcf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6cnf9.mount: Deactivated successfully. Jul 12 00:08:00.044184 systemd[1]: var-lib-kubelet-pods-2cd7c4c1\x2da14d\x2d4383\x2db3e7\x2d826ad1471bcf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 12 00:08:00.134862 kubelet[3316]: I0712 00:08:00.134816 3316 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cnf9\" (UniqueName: \"kubernetes.io/projected/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-kube-api-access-6cnf9\") on node \"ci-4081.3.4-n-0fb9ec6aad\" DevicePath \"\"" Jul 12 00:08:00.134862 kubelet[3316]: I0712 00:08:00.134849 3316 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf-whisker-backend-key-pair\") on node \"ci-4081.3.4-n-0fb9ec6aad\" DevicePath \"\"" Jul 12 00:08:00.738321 kubelet[3316]: I0712 00:08:00.738274 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faf8ad53-2042-489f-a7b3-c83d935709f0-whisker-ca-bundle\") pod \"whisker-5f5d778749-9w4fd\" (UID: \"faf8ad53-2042-489f-a7b3-c83d935709f0\") " pod="calico-system/whisker-5f5d778749-9w4fd" Jul 12 00:08:00.738321 kubelet[3316]: I0712 00:08:00.738326 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/faf8ad53-2042-489f-a7b3-c83d935709f0-whisker-backend-key-pair\") pod \"whisker-5f5d778749-9w4fd\" (UID: \"faf8ad53-2042-489f-a7b3-c83d935709f0\") " pod="calico-system/whisker-5f5d778749-9w4fd" Jul 12 00:08:00.738744 kubelet[3316]: I0712 00:08:00.738353 3316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdp6\" (UniqueName: \"kubernetes.io/projected/faf8ad53-2042-489f-a7b3-c83d935709f0-kube-api-access-ccdp6\") pod \"whisker-5f5d778749-9w4fd\" (UID: \"faf8ad53-2042-489f-a7b3-c83d935709f0\") " pod="calico-system/whisker-5f5d778749-9w4fd" Jul 12 00:08:00.974276 containerd[1906]: time="2025-07-12T00:08:00.974161305Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5f5d778749-9w4fd,Uid:faf8ad53-2042-489f-a7b3-c83d935709f0,Namespace:calico-system,Attempt:0,}" Jul 12 00:08:01.204284 systemd-networkd[1392]: calica9280cf906: Link UP Jul 12 00:08:01.204552 systemd-networkd[1392]: calica9280cf906: Gained carrier Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.085 [INFO][4634] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.102 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0 whisker-5f5d778749- calico-system faf8ad53-2042-489f-a7b3-c83d935709f0 917 0 2025-07-12 00:08:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f5d778749 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad whisker-5f5d778749-9w4fd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica9280cf906 [] [] }} ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.103 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.136 [INFO][4659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" 
HandleID="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.137 [INFO][4659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" HandleID="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"whisker-5f5d778749-9w4fd", "timestamp":"2025-07-12 00:08:01.136860744 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.137 [INFO][4659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.137 [INFO][4659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.137 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.145 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.152 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.156 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.158 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.160 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.160 [INFO][4659] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.161 [INFO][4659] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970 Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.172 [INFO][4659] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.180 [INFO][4659] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.193/26] block=192.168.2.192/26 handle="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.180 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.193/26] handle="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.180 [INFO][4659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:01.239708 containerd[1906]: 2025-07-12 00:08:01.180 [INFO][4659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.193/26] IPv6=[] ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" HandleID="k8s-pod-network.b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.183 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0", GenerateName:"whisker-5f5d778749-", Namespace:"calico-system", SelfLink:"", UID:"faf8ad53-2042-489f-a7b3-c83d935709f0", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f5d778749", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"whisker-5f5d778749-9w4fd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica9280cf906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.183 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.193/32] ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.183 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica9280cf906 ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.198 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.210 [INFO][4634] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0", GenerateName:"whisker-5f5d778749-", Namespace:"calico-system", SelfLink:"", UID:"faf8ad53-2042-489f-a7b3-c83d935709f0", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 8, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f5d778749", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970", Pod:"whisker-5f5d778749-9w4fd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica9280cf906", MAC:"6a:27:aa:c6:24:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:01.244013 containerd[1906]: 2025-07-12 00:08:01.232 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970" 
Namespace="calico-system" Pod="whisker-5f5d778749-9w4fd" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--5f5d778749--9w4fd-eth0" Jul 12 00:08:01.296535 containerd[1906]: time="2025-07-12T00:08:01.295840745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:01.296535 containerd[1906]: time="2025-07-12T00:08:01.296411264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:01.299559 containerd[1906]: time="2025-07-12T00:08:01.299118903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:01.300100 containerd[1906]: time="2025-07-12T00:08:01.299991382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:01.400335 kubelet[3316]: I0712 00:08:01.400281 3316 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd7c4c1-a14d-4383-b3e7-826ad1471bcf" path="/var/lib/kubelet/pods/2cd7c4c1-a14d-4383-b3e7-826ad1471bcf/volumes" Jul 12 00:08:01.503600 containerd[1906]: time="2025-07-12T00:08:01.503561161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f5d778749-9w4fd,Uid:faf8ad53-2042-489f-a7b3-c83d935709f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970\"" Jul 12 00:08:01.505923 containerd[1906]: time="2025-07-12T00:08:01.505880560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:08:03.206252 systemd-networkd[1392]: calica9280cf906: Gained IPv6LL Jul 12 00:08:03.550656 kubelet[3316]: I0712 00:08:03.550156 3316 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:08:03.723373 containerd[1906]: time="2025-07-12T00:08:03.723309617Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.732057 containerd[1906]: time="2025-07-12T00:08:03.731996332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:08:03.737531 containerd[1906]: time="2025-07-12T00:08:03.737479170Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.747315 containerd[1906]: time="2025-07-12T00:08:03.747262165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:03.751259 containerd[1906]: time="2025-07-12T00:08:03.751210443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 2.245286683s" Jul 12 00:08:03.751259 containerd[1906]: time="2025-07-12T00:08:03.751255683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:08:03.757111 containerd[1906]: time="2025-07-12T00:08:03.755848801Z" level=info msg="CreateContainer within sandbox \"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:08:03.824360 containerd[1906]: time="2025-07-12T00:08:03.824155167Z" level=info msg="CreateContainer within sandbox 
\"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ffb22bbf9c6e7b27b0d27bb8ac197c9d838bb67efa694964944e430bfe618f04\"" Jul 12 00:08:03.826293 containerd[1906]: time="2025-07-12T00:08:03.826173086Z" level=info msg="StartContainer for \"ffb22bbf9c6e7b27b0d27bb8ac197c9d838bb67efa694964944e430bfe618f04\"" Jul 12 00:08:03.920224 containerd[1906]: time="2025-07-12T00:08:03.919198959Z" level=info msg="StartContainer for \"ffb22bbf9c6e7b27b0d27bb8ac197c9d838bb67efa694964944e430bfe618f04\" returns successfully" Jul 12 00:08:03.922101 containerd[1906]: time="2025-07-12T00:08:03.921637958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:08:04.341829 containerd[1906]: time="2025-07-12T00:08:04.341549029Z" level=info msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.390 [INFO][4837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.390 [INFO][4837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" iface="eth0" netns="/var/run/netns/cni-1f3201aa-dff3-102a-6ce7-d49ade2fcd6a" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.390 [INFO][4837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" iface="eth0" netns="/var/run/netns/cni-1f3201aa-dff3-102a-6ce7-d49ade2fcd6a" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.390 [INFO][4837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" iface="eth0" netns="/var/run/netns/cni-1f3201aa-dff3-102a-6ce7-d49ade2fcd6a" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.390 [INFO][4837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.392 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.416 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.417 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.417 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.428 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.428 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.429 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:04.433974 containerd[1906]: 2025-07-12 00:08:04.430 [INFO][4837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:04.435477 containerd[1906]: time="2025-07-12T00:08:04.435225822Z" level=info msg="TearDown network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" successfully" Jul 12 00:08:04.435477 containerd[1906]: time="2025-07-12T00:08:04.435256982Z" level=info msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" returns successfully" Jul 12 00:08:04.437173 containerd[1906]: time="2025-07-12T00:08:04.436036422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-blcvm,Uid:63578b12-3c8e-4f1d-8853-0022228cafa4,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:04.437930 systemd[1]: run-netns-cni\x2d1f3201aa\x2ddff3\x2d102a\x2d6ce7\x2dd49ade2fcd6a.mount: Deactivated successfully. 
Jul 12 00:08:04.548143 kernel: bpftool[4877]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:08:04.642336 systemd-networkd[1392]: calid8469dd7741: Link UP Jul 12 00:08:04.643232 systemd-networkd[1392]: calid8469dd7741: Gained carrier Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.567 [INFO][4860] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0 coredns-7c65d6cfc9- kube-system 63578b12-3c8e-4f1d-8853-0022228cafa4 943 0 2025-07-12 00:07:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad coredns-7c65d6cfc9-blcvm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid8469dd7741 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.569 [INFO][4860] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.592 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" HandleID="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.663982 containerd[1906]: 
2025-07-12 00:08:04.592 [INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" HandleID="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"coredns-7c65d6cfc9-blcvm", "timestamp":"2025-07-12 00:08:04.592800264 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.592 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.593 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.593 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.602 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.611 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.615 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.617 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.619 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.619 [INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.621 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278 Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.630 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.637 [INFO][4881] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.194/26] block=192.168.2.192/26 handle="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.637 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.194/26] handle="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.637 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:04.663982 containerd[1906]: 2025-07-12 00:08:04.637 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.194/26] IPv6=[] ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" HandleID="k8s-pod-network.483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.664556 containerd[1906]: 2025-07-12 00:08:04.639 [INFO][4860] cni-plugin/k8s.go 418: Populated endpoint ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63578b12-3c8e-4f1d-8853-0022228cafa4", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"coredns-7c65d6cfc9-blcvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8469dd7741", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.664556 containerd[1906]: 2025-07-12 00:08:04.639 [INFO][4860] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.194/32] ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.664556 containerd[1906]: 2025-07-12 00:08:04.639 [INFO][4860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8469dd7741 ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.664556 containerd[1906]: 2025-07-12 00:08:04.643 [INFO][4860] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.664556 containerd[1906]: 2025-07-12 00:08:04.643 [INFO][4860] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63578b12-3c8e-4f1d-8853-0022228cafa4", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278", Pod:"coredns-7c65d6cfc9-blcvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8469dd7741", MAC:"56:08:b1:59:00:c7", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:04.664773 containerd[1906]: 2025-07-12 00:08:04.662 [INFO][4860] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278" Namespace="kube-system" Pod="coredns-7c65d6cfc9-blcvm" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:04.702622 containerd[1906]: time="2025-07-12T00:08:04.702513609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:04.702764 containerd[1906]: time="2025-07-12T00:08:04.702658209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:04.702842 containerd[1906]: time="2025-07-12T00:08:04.702795609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.703891 containerd[1906]: time="2025-07-12T00:08:04.703798329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:04.769325 containerd[1906]: time="2025-07-12T00:08:04.769282496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-blcvm,Uid:63578b12-3c8e-4f1d-8853-0022228cafa4,Namespace:kube-system,Attempt:1,} returns sandbox id \"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278\"" Jul 12 00:08:04.779292 containerd[1906]: time="2025-07-12T00:08:04.779235011Z" level=info msg="CreateContainer within sandbox \"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:04.810577 systemd-networkd[1392]: vxlan.calico: Link UP Jul 12 00:08:04.810584 systemd-networkd[1392]: vxlan.calico: Gained carrier Jul 12 00:08:04.841799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588408393.mount: Deactivated successfully. Jul 12 00:08:04.865480 containerd[1906]: time="2025-07-12T00:08:04.865393888Z" level=info msg="CreateContainer within sandbox \"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ce64f3a126a47c66e975f5504d14e8c2f1f19be86e8ea8ce8b5fc28ae89101b\"" Jul 12 00:08:04.866008 containerd[1906]: time="2025-07-12T00:08:04.865982728Z" level=info msg="StartContainer for \"9ce64f3a126a47c66e975f5504d14e8c2f1f19be86e8ea8ce8b5fc28ae89101b\"" Jul 12 00:08:04.930303 containerd[1906]: time="2025-07-12T00:08:04.930077576Z" level=info msg="StartContainer for \"9ce64f3a126a47c66e975f5504d14e8c2f1f19be86e8ea8ce8b5fc28ae89101b\" returns successfully" Jul 12 00:08:05.345435 containerd[1906]: time="2025-07-12T00:08:05.344386330Z" level=info msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" Jul 12 00:08:05.347652 containerd[1906]: time="2025-07-12T00:08:05.346849729Z" level=info msg="StopPodSandbox for 
\"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.448 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.449 [INFO][5062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" iface="eth0" netns="/var/run/netns/cni-f8a937d4-2685-c6d2-b76d-b6da9dd53e48" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.449 [INFO][5062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" iface="eth0" netns="/var/run/netns/cni-f8a937d4-2685-c6d2-b76d-b6da9dd53e48" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.450 [INFO][5062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" iface="eth0" netns="/var/run/netns/cni-f8a937d4-2685-c6d2-b76d-b6da9dd53e48" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.450 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.450 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.507 [INFO][5078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.507 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.507 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.516 [WARNING][5078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.516 [INFO][5078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.518 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.523820 containerd[1906]: 2025-07-12 00:08:05.520 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:05.526141 containerd[1906]: time="2025-07-12T00:08:05.526109480Z" level=info msg="TearDown network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" successfully" Jul 12 00:08:05.526240 containerd[1906]: time="2025-07-12T00:08:05.526226000Z" level=info msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" returns successfully" Jul 12 00:08:05.531865 containerd[1906]: time="2025-07-12T00:08:05.531822997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jddn9,Uid:a8289963-d035-4bbc-9efb-a53e9428a42b,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.462 [INFO][5061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.464 [INFO][5061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" iface="eth0" netns="/var/run/netns/cni-8632a68c-e17f-9d03-0713-eca703fe3dab" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.465 [INFO][5061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" iface="eth0" netns="/var/run/netns/cni-8632a68c-e17f-9d03-0713-eca703fe3dab" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.466 [INFO][5061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" iface="eth0" netns="/var/run/netns/cni-8632a68c-e17f-9d03-0713-eca703fe3dab" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.466 [INFO][5061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.466 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.506 [INFO][5083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.508 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.518 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.530 [WARNING][5083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.530 [INFO][5083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.532 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.538765 containerd[1906]: 2025-07-12 00:08:05.536 [INFO][5061] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:05.539815 containerd[1906]: time="2025-07-12T00:08:05.539463313Z" level=info msg="TearDown network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" successfully" Jul 12 00:08:05.539815 containerd[1906]: time="2025-07-12T00:08:05.539490633Z" level=info msg="StopPodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" returns successfully" Jul 12 00:08:05.540705 containerd[1906]: time="2025-07-12T00:08:05.540669872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5649f74b8d-dpdd7,Uid:d03f5dbf-aed9-446d-a8ba-7e995cb04351,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:05.642278 kubelet[3316]: I0712 00:08:05.641739 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-blcvm" podStartSLOduration=43.641720542 podStartE2EDuration="43.641720542s" podCreationTimestamp="2025-07-12 00:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:05.641099182 +0000 UTC m=+50.383230328" watchObservedRunningTime="2025-07-12 00:08:05.641720542 +0000 UTC m=+50.383851768" Jul 12 00:08:05.804772 systemd[1]: run-netns-cni\x2df8a937d4\x2d2685\x2dc6d2\x2db76d\x2db6da9dd53e48.mount: Deactivated successfully. Jul 12 00:08:05.805542 systemd[1]: run-netns-cni\x2d8632a68c\x2de17f\x2d9d03\x2d0713\x2deca703fe3dab.mount: Deactivated successfully. 
Jul 12 00:08:05.835754 systemd-networkd[1392]: cali09b9ed1c00a: Link UP Jul 12 00:08:05.836363 systemd-networkd[1392]: cali09b9ed1c00a: Gained carrier Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.718 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0 calico-kube-controllers-5649f74b8d- calico-system d03f5dbf-aed9-446d-a8ba-7e995cb04351 956 0 2025-07-12 00:07:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5649f74b8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad calico-kube-controllers-5649f74b8d-dpdd7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali09b9ed1c00a [] [] }} ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.719 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.768 [INFO][5117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" HandleID="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" 
Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.768 [INFO][5117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" HandleID="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000272f40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"calico-kube-controllers-5649f74b8d-dpdd7", "timestamp":"2025-07-12 00:08:05.768119399 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.779 [INFO][5117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.784 [INFO][5117] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.788 [INFO][5117] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.791 [INFO][5117] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.793 [INFO][5117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.793 [INFO][5117] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.794 [INFO][5117] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999 Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.810 [INFO][5117] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.823 [INFO][5117] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.195/26] block=192.168.2.192/26 handle="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.823 [INFO][5117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.195/26] handle="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.823 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.860128 containerd[1906]: 2025-07-12 00:08:05.823 [INFO][5117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.195/26] IPv6=[] ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" HandleID="k8s-pod-network.ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.828 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0", GenerateName:"calico-kube-controllers-5649f74b8d-", Namespace:"calico-system", SelfLink:"", UID:"d03f5dbf-aed9-446d-a8ba-7e995cb04351", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5649f74b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"calico-kube-controllers-5649f74b8d-dpdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09b9ed1c00a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.828 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.195/32] ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.828 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09b9ed1c00a ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.839 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.841 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0", GenerateName:"calico-kube-controllers-5649f74b8d-", Namespace:"calico-system", SelfLink:"", UID:"d03f5dbf-aed9-446d-a8ba-7e995cb04351", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5649f74b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999", Pod:"calico-kube-controllers-5649f74b8d-dpdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09b9ed1c00a", MAC:"fe:4f:2e:e6:1d:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.861259 containerd[1906]: 2025-07-12 00:08:05.858 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999" Namespace="calico-system" Pod="calico-kube-controllers-5649f74b8d-dpdd7" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:05.915859 containerd[1906]: time="2025-07-12T00:08:05.915684846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:05.916063 containerd[1906]: time="2025-07-12T00:08:05.915758526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:05.916063 containerd[1906]: time="2025-07-12T00:08:05.915772326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.916063 containerd[1906]: time="2025-07-12T00:08:05.915876246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:05.943561 systemd-networkd[1392]: calif0bc4d3738a: Link UP Jul 12 00:08:05.945344 systemd-networkd[1392]: calif0bc4d3738a: Gained carrier Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.725 [INFO][5105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0 goldmane-58fd7646b9- calico-system a8289963-d035-4bbc-9efb-a53e9428a42b 955 0 2025-07-12 00:07:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad goldmane-58fd7646b9-jddn9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0bc4d3738a [] [] }} ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.726 [INFO][5105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" HandleID="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5120] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" HandleID="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"goldmane-58fd7646b9-jddn9", "timestamp":"2025-07-12 00:08:05.769548519 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.769 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.824 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.824 [INFO][5120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.881 [INFO][5120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.889 [INFO][5120] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.897 [INFO][5120] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.901 [INFO][5120] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.903 [INFO][5120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.904 [INFO][5120] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.905 [INFO][5120] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0 Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.914 [INFO][5120] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.932 [INFO][5120] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.196/26] block=192.168.2.192/26 handle="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.932 [INFO][5120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.196/26] handle="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.932 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:05.989297 containerd[1906]: 2025-07-12 00:08:05.932 [INFO][5120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.196/26] IPv6=[] ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" HandleID="k8s-pod-network.924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.939 [INFO][5105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a8289963-d035-4bbc-9efb-a53e9428a42b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"goldmane-58fd7646b9-jddn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bc4d3738a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.939 [INFO][5105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.196/32] ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.939 [INFO][5105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0bc4d3738a ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.946 [INFO][5105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.949 [INFO][5105] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a8289963-d035-4bbc-9efb-a53e9428a42b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0", Pod:"goldmane-58fd7646b9-jddn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bc4d3738a", MAC:"b2:a2:e0:85:ae:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:05.990548 containerd[1906]: 2025-07-12 00:08:05.982 [INFO][5105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0" Namespace="calico-system" Pod="goldmane-58fd7646b9-jddn9" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:06.027678 containerd[1906]: time="2025-07-12T00:08:06.025549751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:06.027678 containerd[1906]: time="2025-07-12T00:08:06.025611751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:06.027678 containerd[1906]: time="2025-07-12T00:08:06.025629271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:06.027678 containerd[1906]: time="2025-07-12T00:08:06.025748871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:06.065623 containerd[1906]: time="2025-07-12T00:08:06.065577251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5649f74b8d-dpdd7,Uid:d03f5dbf-aed9-446d-a8ba-7e995cb04351,Namespace:calico-system,Attempt:1,} returns sandbox id \"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999\"" Jul 12 00:08:06.087237 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Jul 12 00:08:06.127320 containerd[1906]: time="2025-07-12T00:08:06.126158941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jddn9,Uid:a8289963-d035-4bbc-9efb-a53e9428a42b,Namespace:calico-system,Attempt:1,} returns sandbox id \"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0\"" Jul 12 00:08:06.328230 containerd[1906]: time="2025-07-12T00:08:06.328179841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:06.332249 containerd[1906]: time="2025-07-12T00:08:06.332215199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:08:06.340658 containerd[1906]: time="2025-07-12T00:08:06.340621274Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:06.343559 containerd[1906]: time="2025-07-12T00:08:06.343141593Z" level=info msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" Jul 12 00:08:06.346688 containerd[1906]: time="2025-07-12T00:08:06.346644871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:06.347464 containerd[1906]: time="2025-07-12T00:08:06.347433151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 2.425662633s" Jul 12 00:08:06.347512 containerd[1906]: time="2025-07-12T00:08:06.347469111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:08:06.347911 containerd[1906]: time="2025-07-12T00:08:06.347885831Z" level=info msg="StopPodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" Jul 12 00:08:06.366470 containerd[1906]: time="2025-07-12T00:08:06.365162862Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:08:06.368688 containerd[1906]: time="2025-07-12T00:08:06.368457661Z" level=info msg="CreateContainer within sandbox \"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.431 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.432 [INFO][5253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" iface="eth0" netns="/var/run/netns/cni-2de3e93f-20bd-6100-aabb-9e04270c0931" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.433 [INFO][5253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" iface="eth0" netns="/var/run/netns/cni-2de3e93f-20bd-6100-aabb-9e04270c0931" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.433 [INFO][5253] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" iface="eth0" netns="/var/run/netns/cni-2de3e93f-20bd-6100-aabb-9e04270c0931" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.434 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.434 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.457 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.457 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.457 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.466 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.466 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.468 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:06.471675 containerd[1906]: 2025-07-12 00:08:06.470 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:06.472742 containerd[1906]: time="2025-07-12T00:08:06.472358969Z" level=info msg="TearDown network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" successfully" Jul 12 00:08:06.472742 containerd[1906]: time="2025-07-12T00:08:06.472399289Z" level=info msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" returns successfully" Jul 12 00:08:06.473731 containerd[1906]: time="2025-07-12T00:08:06.473623248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wgr4,Uid:b68f90c7-c121-47a4-9328-c85559bf7c5c,Namespace:calico-system,Attempt:1,}" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.426 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.427 [INFO][5258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" iface="eth0" netns="/var/run/netns/cni-5cdcde08-c0b9-70a4-b324-38ab29e398ab" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.427 [INFO][5258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" iface="eth0" netns="/var/run/netns/cni-5cdcde08-c0b9-70a4-b324-38ab29e398ab" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.428 [INFO][5258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" iface="eth0" netns="/var/run/netns/cni-5cdcde08-c0b9-70a4-b324-38ab29e398ab" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.428 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.428 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.457 [INFO][5270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.458 [INFO][5270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.468 [INFO][5270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.477 [WARNING][5270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.477 [INFO][5270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.479 [INFO][5270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:06.482159 containerd[1906]: 2025-07-12 00:08:06.480 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:06.482919 containerd[1906]: time="2025-07-12T00:08:06.482691684Z" level=info msg="TearDown network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" successfully" Jul 12 00:08:06.482919 containerd[1906]: time="2025-07-12T00:08:06.482722444Z" level=info msg="StopPodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" returns successfully" Jul 12 00:08:06.483523 containerd[1906]: time="2025-07-12T00:08:06.483497404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-ttd9w,Uid:90652f12-5f24-4f92-ba31-8e8fc442c377,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:06.726310 systemd-networkd[1392]: calid8469dd7741: Gained IPv6LL Jul 12 00:08:06.800664 systemd[1]: run-containerd-runc-k8s.io-924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0-runc.MGo8Ok.mount: Deactivated successfully. Jul 12 00:08:06.800823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527812327.mount: Deactivated successfully. Jul 12 00:08:06.800913 systemd[1]: run-netns-cni\x2d2de3e93f\x2d20bd\x2d6100\x2daabb\x2d9e04270c0931.mount: Deactivated successfully. Jul 12 00:08:06.800987 systemd[1]: run-netns-cni\x2d5cdcde08\x2dc0b9\x2d70a4\x2db324\x2d38ab29e398ab.mount: Deactivated successfully. 
Jul 12 00:08:06.807595 containerd[1906]: time="2025-07-12T00:08:06.807552446Z" level=info msg="CreateContainer within sandbox \"b48377141090c36a6f59f9c502087005923105ae753f7a14952d9695c3f43970\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"19d5cffdfb3616b25f6bc6d013e41b8d4b3c7da62745cf1f8edc6e8a55f6c73a\"" Jul 12 00:08:06.808737 containerd[1906]: time="2025-07-12T00:08:06.808476165Z" level=info msg="StartContainer for \"19d5cffdfb3616b25f6bc6d013e41b8d4b3c7da62745cf1f8edc6e8a55f6c73a\"" Jul 12 00:08:06.891602 containerd[1906]: time="2025-07-12T00:08:06.891223205Z" level=info msg="StartContainer for \"19d5cffdfb3616b25f6bc6d013e41b8d4b3c7da62745cf1f8edc6e8a55f6c73a\" returns successfully" Jul 12 00:08:07.028237 systemd-networkd[1392]: cali4a6761bde42: Link UP Jul 12 00:08:07.032691 systemd-networkd[1392]: cali4a6761bde42: Gained carrier Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.941 [INFO][5318] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0 csi-node-driver- calico-system b68f90c7-c121-47a4-9328-c85559bf7c5c 972 0 2025-07-12 00:07:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad csi-node-driver-7wgr4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4a6761bde42 [] [] }} ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.941 [INFO][5318] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.978 [INFO][5344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" HandleID="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.978 [INFO][5344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" HandleID="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"csi-node-driver-7wgr4", "timestamp":"2025-07-12 00:08:06.978526722 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.978 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.978 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.978 [INFO][5344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.990 [INFO][5344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.995 [INFO][5344] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:06.999 [INFO][5344] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.000 [INFO][5344] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.002 [INFO][5344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.002 [INFO][5344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.004 [INFO][5344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7 Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.013 [INFO][5344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.021 [INFO][5344] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.197/26] block=192.168.2.192/26 handle="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.021 [INFO][5344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.197/26] handle="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.021 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:07.061374 containerd[1906]: 2025-07-12 00:08:07.022 [INFO][5344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.197/26] IPv6=[] ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" HandleID="k8s-pod-network.a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.064498 containerd[1906]: 2025-07-12 00:08:07.024 [INFO][5318] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b68f90c7-c121-47a4-9328-c85559bf7c5c", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"csi-node-driver-7wgr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6761bde42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.064498 containerd[1906]: 2025-07-12 00:08:07.024 [INFO][5318] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.197/32] ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.064498 containerd[1906]: 2025-07-12 00:08:07.024 [INFO][5318] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a6761bde42 ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.064498 containerd[1906]: 2025-07-12 00:08:07.030 [INFO][5318] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.064498 
containerd[1906]: 2025-07-12 00:08:07.032 [INFO][5318] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b68f90c7-c121-47a4-9328-c85559bf7c5c", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7", Pod:"csi-node-driver-7wgr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6761bde42", MAC:"8e:08:8b:cc:27:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.064498 containerd[1906]: 2025-07-12 
00:08:07.057 [INFO][5318] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7" Namespace="calico-system" Pod="csi-node-driver-7wgr4" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:07.084649 containerd[1906]: time="2025-07-12T00:08:07.084447511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:07.084649 containerd[1906]: time="2025-07-12T00:08:07.084602551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:07.084889 containerd[1906]: time="2025-07-12T00:08:07.084687951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.085274 containerd[1906]: time="2025-07-12T00:08:07.085168110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.125494 containerd[1906]: time="2025-07-12T00:08:07.125447251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wgr4,Uid:b68f90c7-c121-47a4-9328-c85559bf7c5c,Namespace:calico-system,Attempt:1,} returns sandbox id \"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7\"" Jul 12 00:08:07.143763 systemd-networkd[1392]: cali68119f7e6c8: Link UP Jul 12 00:08:07.143956 systemd-networkd[1392]: cali68119f7e6c8: Gained carrier Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:06.955 [INFO][5327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0 calico-apiserver-b47f68fd8- calico-apiserver 90652f12-5f24-4f92-ba31-8e8fc442c377 971 0 2025-07-12 00:07:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b47f68fd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad calico-apiserver-b47f68fd8-ttd9w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68119f7e6c8 [] [] }} ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:06.955 [INFO][5327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163281 containerd[1906]: 
2025-07-12 00:08:06.991 [INFO][5349] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" HandleID="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:06.991 [INFO][5349] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" HandleID="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"calico-apiserver-b47f68fd8-ttd9w", "timestamp":"2025-07-12 00:08:06.991429236 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:06.991 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.022 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.022 [INFO][5349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.100 [INFO][5349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.105 [INFO][5349] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.110 [INFO][5349] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.112 [INFO][5349] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.116 [INFO][5349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.116 [INFO][5349] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.118 [INFO][5349] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23 Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.126 [INFO][5349] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.136 [INFO][5349] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.198/26] block=192.168.2.192/26 handle="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.137 [INFO][5349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.198/26] handle="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.137 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:07.163281 containerd[1906]: 2025-07-12 00:08:07.137 [INFO][5349] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.198/26] IPv6=[] ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" HandleID="k8s-pod-network.35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.139 [INFO][5327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"90652f12-5f24-4f92-ba31-8e8fc442c377", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"calico-apiserver-b47f68fd8-ttd9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68119f7e6c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.139 [INFO][5327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.198/32] ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.139 [INFO][5327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68119f7e6c8 ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.143 [INFO][5327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" 
WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.144 [INFO][5327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"90652f12-5f24-4f92-ba31-8e8fc442c377", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23", Pod:"calico-apiserver-b47f68fd8-ttd9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68119f7e6c8", MAC:"f6:49:a1:9e:3c:47", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.163829 containerd[1906]: 2025-07-12 00:08:07.160 [INFO][5327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-ttd9w" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:07.194569 containerd[1906]: time="2025-07-12T00:08:07.194070057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:07.194569 containerd[1906]: time="2025-07-12T00:08:07.194146577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:07.194569 containerd[1906]: time="2025-07-12T00:08:07.194162017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.195077 containerd[1906]: time="2025-07-12T00:08:07.195001817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.251165 containerd[1906]: time="2025-07-12T00:08:07.250287150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-ttd9w,Uid:90652f12-5f24-4f92-ba31-8e8fc442c377,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23\"" Jul 12 00:08:07.342733 containerd[1906]: time="2025-07-12T00:08:07.342610985Z" level=info msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" Jul 12 00:08:07.366239 systemd-networkd[1392]: calif0bc4d3738a: Gained IPv6LL Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" iface="eth0" netns="/var/run/netns/cni-f40fcdbe-6560-4832-fdc2-d5435eded354" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" iface="eth0" netns="/var/run/netns/cni-f40fcdbe-6560-4832-fdc2-d5435eded354" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" iface="eth0" netns="/var/run/netns/cni-f40fcdbe-6560-4832-fdc2-d5435eded354" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.389 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.411 [INFO][5473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.411 [INFO][5473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.411 [INFO][5473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.428 [WARNING][5473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.429 [INFO][5473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.432 [INFO][5473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:07.435588 containerd[1906]: 2025-07-12 00:08:07.434 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:07.436207 containerd[1906]: time="2025-07-12T00:08:07.436058939Z" level=info msg="TearDown network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" successfully" Jul 12 00:08:07.436207 containerd[1906]: time="2025-07-12T00:08:07.436109499Z" level=info msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" returns successfully" Jul 12 00:08:07.436900 containerd[1906]: time="2025-07-12T00:08:07.436784259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-s5g4q,Uid:31fec83d-971d-44e2-913b-a79f6e564b60,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:08:07.592924 systemd-networkd[1392]: calic28a73b6fda: Link UP Jul 12 00:08:07.594068 systemd-networkd[1392]: calic28a73b6fda: Gained carrier Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.512 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0 calico-apiserver-b47f68fd8- calico-apiserver 31fec83d-971d-44e2-913b-a79f6e564b60 993 0 2025-07-12 00:07:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b47f68fd8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad calico-apiserver-b47f68fd8-s5g4q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic28a73b6fda [] [] }} ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.512 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.539 [INFO][5492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" HandleID="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.539 [INFO][5492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" HandleID="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" 
Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"calico-apiserver-b47f68fd8-s5g4q", "timestamp":"2025-07-12 00:08:07.539590769 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.539 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.539 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.539 [INFO][5492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.549 [INFO][5492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.553 [INFO][5492] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.558 [INFO][5492] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.560 [INFO][5492] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.562 [INFO][5492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 
12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.563 [INFO][5492] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.565 [INFO][5492] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38 Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.573 [INFO][5492] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.588 [INFO][5492] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.199/26] block=192.168.2.192/26 handle="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.588 [INFO][5492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.199/26] handle="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.588 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:08:07.616178 containerd[1906]: 2025-07-12 00:08:07.588 [INFO][5492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.199/26] IPv6=[] ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" HandleID="k8s-pod-network.7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.590 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"31fec83d-971d-44e2-913b-a79f6e564b60", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"calico-apiserver-b47f68fd8-s5g4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic28a73b6fda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.590 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.199/32] ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.590 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic28a73b6fda ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.595 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.597 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"31fec83d-971d-44e2-913b-a79f6e564b60", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38", Pod:"calico-apiserver-b47f68fd8-s5g4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic28a73b6fda", MAC:"3e:b5:6f:8e:14:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:07.617053 containerd[1906]: 2025-07-12 00:08:07.612 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38" Namespace="calico-apiserver" Pod="calico-apiserver-b47f68fd8-s5g4q" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:07.656575 containerd[1906]: time="2025-07-12T00:08:07.655446553Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:07.656575 containerd[1906]: time="2025-07-12T00:08:07.655512633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:07.656575 containerd[1906]: time="2025-07-12T00:08:07.655527673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.656575 containerd[1906]: time="2025-07-12T00:08:07.655624192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:07.686312 systemd-networkd[1392]: cali09b9ed1c00a: Gained IPv6LL Jul 12 00:08:07.708697 containerd[1906]: time="2025-07-12T00:08:07.708656407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b47f68fd8-s5g4q,Uid:31fec83d-971d-44e2-913b-a79f6e564b60,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38\"" Jul 12 00:08:07.809443 systemd[1]: run-netns-cni\x2df40fcdbe\x2d6560\x2d4832\x2dfdc2\x2dd5435eded354.mount: Deactivated successfully. 
Jul 12 00:08:08.454517 systemd-networkd[1392]: cali4a6761bde42: Gained IPv6LL Jul 12 00:08:08.710263 systemd-networkd[1392]: calic28a73b6fda: Gained IPv6LL Jul 12 00:08:08.839497 systemd-networkd[1392]: cali68119f7e6c8: Gained IPv6LL Jul 12 00:08:09.344099 containerd[1906]: time="2025-07-12T00:08:09.343832490Z" level=info msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" Jul 12 00:08:09.404021 kubelet[3316]: I0712 00:08:09.402842 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f5d778749-9w4fd" podStartSLOduration=4.543194518 podStartE2EDuration="9.402822701s" podCreationTimestamp="2025-07-12 00:08:00 +0000 UTC" firstStartedPulling="2025-07-12 00:08:01.50492216 +0000 UTC m=+46.247053226" lastFinishedPulling="2025-07-12 00:08:06.364550263 +0000 UTC m=+51.106681409" observedRunningTime="2025-07-12 00:08:07.653253474 +0000 UTC m=+52.395384580" watchObservedRunningTime="2025-07-12 00:08:09.402822701 +0000 UTC m=+54.144953807" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.404 [INFO][5565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.404 [INFO][5565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" iface="eth0" netns="/var/run/netns/cni-9d74590c-b415-3712-2862-47c8b634b6d6" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.405 [INFO][5565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" iface="eth0" netns="/var/run/netns/cni-9d74590c-b415-3712-2862-47c8b634b6d6" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.405 [INFO][5565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" iface="eth0" netns="/var/run/netns/cni-9d74590c-b415-3712-2862-47c8b634b6d6" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.405 [INFO][5565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.405 [INFO][5565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.429 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.430 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.430 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.444 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.444 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.445 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:09.449637 containerd[1906]: 2025-07-12 00:08:09.447 [INFO][5565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:09.449637 containerd[1906]: time="2025-07-12T00:08:09.449147838Z" level=info msg="TearDown network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" successfully" Jul 12 00:08:09.449637 containerd[1906]: time="2025-07-12T00:08:09.449175318Z" level=info msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" returns successfully" Jul 12 00:08:09.450109 containerd[1906]: time="2025-07-12T00:08:09.449864438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58ltv,Uid:040330d5-fac5-4f81-94d3-dbbc52011ddb,Namespace:kube-system,Attempt:1,}" Jul 12 00:08:09.453896 systemd[1]: run-netns-cni\x2d9d74590c\x2db415\x2d3712\x2d2862\x2d47c8b634b6d6.mount: Deactivated successfully. 
Jul 12 00:08:09.571173 containerd[1906]: time="2025-07-12T00:08:09.571117339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.586124 containerd[1906]: time="2025-07-12T00:08:09.586054412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:08:09.598208 containerd[1906]: time="2025-07-12T00:08:09.598002246Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.617856 containerd[1906]: time="2025-07-12T00:08:09.617581276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.251253614s" Jul 12 00:08:09.617856 containerd[1906]: time="2025-07-12T00:08:09.617622516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:08:09.618723 containerd[1906]: time="2025-07-12T00:08:09.618533356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:09.622714 containerd[1906]: time="2025-07-12T00:08:09.622306114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:08:09.641494 containerd[1906]: time="2025-07-12T00:08:09.641462625Z" level=info msg="CreateContainer within sandbox 
\"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:08:09.698311 containerd[1906]: time="2025-07-12T00:08:09.698257037Z" level=info msg="CreateContainer within sandbox \"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b4e546c13ab1f9867a80cb9b8a51b42e98b12c317ee44282820236088f7903a4\"" Jul 12 00:08:09.700310 containerd[1906]: time="2025-07-12T00:08:09.700240996Z" level=info msg="StartContainer for \"b4e546c13ab1f9867a80cb9b8a51b42e98b12c317ee44282820236088f7903a4\"" Jul 12 00:08:09.756479 systemd-networkd[1392]: calib21c7bf4855: Link UP Jul 12 00:08:09.758307 systemd-networkd[1392]: calib21c7bf4855: Gained carrier Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.664 [INFO][5580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0 coredns-7c65d6cfc9- kube-system 040330d5-fac5-4f81-94d3-dbbc52011ddb 1008 0 2025-07-12 00:07:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.4-n-0fb9ec6aad coredns-7c65d6cfc9-58ltv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib21c7bf4855 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.665 [INFO][5580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.694 [INFO][5596] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" HandleID="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.694 [INFO][5596] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" HandleID="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.4-n-0fb9ec6aad", "pod":"coredns-7c65d6cfc9-58ltv", "timestamp":"2025-07-12 00:08:09.694611279 +0000 UTC"}, Hostname:"ci-4081.3.4-n-0fb9ec6aad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.694 [INFO][5596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.695 [INFO][5596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.695 [INFO][5596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.4-n-0fb9ec6aad' Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.708 [INFO][5596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.716 [INFO][5596] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.721 [INFO][5596] ipam/ipam.go 511: Trying affinity for 192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.724 [INFO][5596] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.727 [INFO][5596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.727 [INFO][5596] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.730 [INFO][5596] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929 Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.737 [INFO][5596] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.747 [INFO][5596] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.2.200/26] block=192.168.2.192/26 handle="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.747 [INFO][5596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.200/26] handle="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" host="ci-4081.3.4-n-0fb9ec6aad" Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.747 [INFO][5596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:09.777563 containerd[1906]: 2025-07-12 00:08:09.748 [INFO][5596] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.200/26] IPv6=[] ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" HandleID="k8s-pod-network.ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.779760 containerd[1906]: 2025-07-12 00:08:09.752 [INFO][5580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"040330d5-fac5-4f81-94d3-dbbc52011ddb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"", Pod:"coredns-7c65d6cfc9-58ltv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib21c7bf4855", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:09.779760 containerd[1906]: 2025-07-12 00:08:09.752 [INFO][5580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.200/32] ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.779760 containerd[1906]: 2025-07-12 00:08:09.752 [INFO][5580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib21c7bf4855 ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.779760 containerd[1906]: 2025-07-12 00:08:09.760 [INFO][5580] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.779760 containerd[1906]: 2025-07-12 00:08:09.761 [INFO][5580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"040330d5-fac5-4f81-94d3-dbbc52011ddb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929", Pod:"coredns-7c65d6cfc9-58ltv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib21c7bf4855", MAC:"7e:05:a4:99:32:d9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:09.780235 containerd[1906]: 2025-07-12 00:08:09.775 [INFO][5580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58ltv" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:09.794410 containerd[1906]: time="2025-07-12T00:08:09.793956510Z" level=info msg="StartContainer for \"b4e546c13ab1f9867a80cb9b8a51b42e98b12c317ee44282820236088f7903a4\" returns successfully" Jul 12 00:08:09.816393 containerd[1906]: time="2025-07-12T00:08:09.815984420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:08:09.817127 containerd[1906]: time="2025-07-12T00:08:09.816037020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:08:09.817127 containerd[1906]: time="2025-07-12T00:08:09.816317020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:09.817127 containerd[1906]: time="2025-07-12T00:08:09.816450539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:08:09.876657 containerd[1906]: time="2025-07-12T00:08:09.876551030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58ltv,Uid:040330d5-fac5-4f81-94d3-dbbc52011ddb,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929\"" Jul 12 00:08:09.880665 containerd[1906]: time="2025-07-12T00:08:09.880430988Z" level=info msg="CreateContainer within sandbox \"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:08:09.948078 containerd[1906]: time="2025-07-12T00:08:09.947970715Z" level=info msg="CreateContainer within sandbox \"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55c47c48fb9b8540acb8b8f3e76af29a9a02374e25a90da6add35bd142069660\"" Jul 12 00:08:09.949019 containerd[1906]: time="2025-07-12T00:08:09.948925955Z" level=info msg="StartContainer for \"55c47c48fb9b8540acb8b8f3e76af29a9a02374e25a90da6add35bd142069660\"" Jul 12 00:08:10.003411 containerd[1906]: time="2025-07-12T00:08:10.003309528Z" level=info msg="StartContainer for \"55c47c48fb9b8540acb8b8f3e76af29a9a02374e25a90da6add35bd142069660\" returns successfully" Jul 12 00:08:10.733210 kubelet[3316]: I0712 00:08:10.732079 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-58ltv" podStartSLOduration=48.732056413 podStartE2EDuration="48.732056413s" podCreationTimestamp="2025-07-12 00:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:10.69825179 +0000 UTC m=+55.440382896" watchObservedRunningTime="2025-07-12 00:08:10.732056413 +0000 UTC m=+55.474187519" Jul 12 00:08:10.733615 kubelet[3316]: I0712 00:08:10.733462 3316 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5649f74b8d-dpdd7" podStartSLOduration=27.180554109 podStartE2EDuration="30.733443413s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:08:06.06873285 +0000 UTC m=+50.810863916" lastFinishedPulling="2025-07-12 00:08:09.621622114 +0000 UTC m=+54.363753220" observedRunningTime="2025-07-12 00:08:10.733215693 +0000 UTC m=+55.475346799" watchObservedRunningTime="2025-07-12 00:08:10.733443413 +0000 UTC m=+55.475574519" Jul 12 00:08:10.886304 systemd-networkd[1392]: calib21c7bf4855: Gained IPv6LL Jul 12 00:08:11.976739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322439580.mount: Deactivated successfully. Jul 12 00:08:13.103127 containerd[1906]: time="2025-07-12T00:08:13.102552058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.105726 containerd[1906]: time="2025-07-12T00:08:13.105674897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:08:13.110613 containerd[1906]: time="2025-07-12T00:08:13.110563614Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.118054 containerd[1906]: time="2025-07-12T00:08:13.118015211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:13.118904 containerd[1906]: time="2025-07-12T00:08:13.118547610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.496212296s" Jul 12 00:08:13.118904 containerd[1906]: time="2025-07-12T00:08:13.118580530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:08:13.119905 containerd[1906]: time="2025-07-12T00:08:13.119611330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:08:13.121656 containerd[1906]: time="2025-07-12T00:08:13.121526969Z" level=info msg="CreateContainer within sandbox \"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:08:13.172828 containerd[1906]: time="2025-07-12T00:08:13.172779704Z" level=info msg="CreateContainer within sandbox \"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d34d3432cc62259d0f391704b13a8b9b98605aecf3c51fdc979c4fedc002c51d\"" Jul 12 00:08:13.173575 containerd[1906]: time="2025-07-12T00:08:13.173355264Z" level=info msg="StartContainer for \"d34d3432cc62259d0f391704b13a8b9b98605aecf3c51fdc979c4fedc002c51d\"" Jul 12 00:08:13.261650 containerd[1906]: time="2025-07-12T00:08:13.261592661Z" level=info msg="StartContainer for \"d34d3432cc62259d0f391704b13a8b9b98605aecf3c51fdc979c4fedc002c51d\" returns successfully" Jul 12 00:08:13.737580 systemd[1]: run-containerd-runc-k8s.io-d34d3432cc62259d0f391704b13a8b9b98605aecf3c51fdc979c4fedc002c51d-runc.tNZqyH.mount: Deactivated successfully. 
Jul 12 00:08:14.775620 containerd[1906]: time="2025-07-12T00:08:14.775566768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.785475 containerd[1906]: time="2025-07-12T00:08:14.785344204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:08:14.792304 containerd[1906]: time="2025-07-12T00:08:14.792251001Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.801382 containerd[1906]: time="2025-07-12T00:08:14.801328756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:14.802967 containerd[1906]: time="2025-07-12T00:08:14.802881076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.683240426s" Jul 12 00:08:14.802967 containerd[1906]: time="2025-07-12T00:08:14.802936196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:08:14.806321 containerd[1906]: time="2025-07-12T00:08:14.804599675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:14.807787 containerd[1906]: time="2025-07-12T00:08:14.807748913Z" level=info msg="CreateContainer within sandbox \"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:08:14.874221 containerd[1906]: time="2025-07-12T00:08:14.874180882Z" level=info msg="CreateContainer within sandbox \"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"504c8aeb68b015a2e53ad3f2d1678568e2ab1aec0710ad27d5049b377c33e127\"" Jul 12 00:08:14.875066 containerd[1906]: time="2025-07-12T00:08:14.875034562Z" level=info msg="StartContainer for \"504c8aeb68b015a2e53ad3f2d1678568e2ab1aec0710ad27d5049b377c33e127\"" Jul 12 00:08:14.931501 containerd[1906]: time="2025-07-12T00:08:14.931444095Z" level=info msg="StartContainer for \"504c8aeb68b015a2e53ad3f2d1678568e2ab1aec0710ad27d5049b377c33e127\" returns successfully" Jul 12 00:08:15.554024 containerd[1906]: time="2025-07-12T00:08:15.553982883Z" level=info msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.595 [WARNING][5928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"040330d5-fac5-4f81-94d3-dbbc52011ddb", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929", Pod:"coredns-7c65d6cfc9-58ltv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib21c7bf4855", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 
00:08:15.596 [INFO][5928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.596 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" iface="eth0" netns="" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.596 [INFO][5928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.596 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.614 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.614 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.614 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.622 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.622 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.623 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.626962 containerd[1906]: 2025-07-12 00:08:15.625 [INFO][5928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.626962 containerd[1906]: time="2025-07-12T00:08:15.626935169Z" level=info msg="TearDown network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" successfully" Jul 12 00:08:15.626962 containerd[1906]: time="2025-07-12T00:08:15.626961489Z" level=info msg="StopPodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" returns successfully" Jul 12 00:08:15.628109 containerd[1906]: time="2025-07-12T00:08:15.628012809Z" level=info msg="RemovePodSandbox for \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" Jul 12 00:08:15.629278 containerd[1906]: time="2025-07-12T00:08:15.629246768Z" level=info msg="Forcibly stopping sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\"" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.661 [WARNING][5950] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"040330d5-fac5-4f81-94d3-dbbc52011ddb", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ac73e75cceacce2571993536171d34f91808f3be7a2038050c8fcf329a192929", Pod:"coredns-7c65d6cfc9-58ltv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib21c7bf4855", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 
00:08:15.661 [INFO][5950] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.661 [INFO][5950] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" iface="eth0" netns="" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.661 [INFO][5950] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.661 [INFO][5950] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.679 [INFO][5957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.679 [INFO][5957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.679 [INFO][5957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.689 [WARNING][5957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.689 [INFO][5957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" HandleID="k8s-pod-network.b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--58ltv-eth0" Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.690 [INFO][5957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.695614 containerd[1906]: 2025-07-12 00:08:15.693 [INFO][5950] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b" Jul 12 00:08:15.696128 containerd[1906]: time="2025-07-12T00:08:15.695638617Z" level=info msg="TearDown network for sandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" successfully" Jul 12 00:08:15.718747 containerd[1906]: time="2025-07-12T00:08:15.718699606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:15.718865 containerd[1906]: time="2025-07-12T00:08:15.718775726Z" level=info msg="RemovePodSandbox \"b15111427efc9b5e32f954c262c1d9f58565131eec6881fd6f340ade5a0a279b\" returns successfully" Jul 12 00:08:15.719566 containerd[1906]: time="2025-07-12T00:08:15.719335246Z" level=info msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.753 [WARNING][5971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a8289963-d035-4bbc-9efb-a53e9428a42b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0", Pod:"goldmane-58fd7646b9-jddn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif0bc4d3738a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.753 [INFO][5971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.753 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" iface="eth0" netns="" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.753 [INFO][5971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.753 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.770 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.770 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.770 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.778 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.778 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.780 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.783663 containerd[1906]: 2025-07-12 00:08:15.782 [INFO][5971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.784359 containerd[1906]: time="2025-07-12T00:08:15.783698616Z" level=info msg="TearDown network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" successfully" Jul 12 00:08:15.784359 containerd[1906]: time="2025-07-12T00:08:15.783723055Z" level=info msg="StopPodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" returns successfully" Jul 12 00:08:15.784359 containerd[1906]: time="2025-07-12T00:08:15.784334415Z" level=info msg="RemovePodSandbox for \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" Jul 12 00:08:15.784359 containerd[1906]: time="2025-07-12T00:08:15.784360015Z" level=info msg="Forcibly stopping sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\"" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.818 [WARNING][5992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a8289963-d035-4bbc-9efb-a53e9428a42b", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"924058ce72da4e00630a6025b42e3f19632fc5ecc134ff832329b4bd262018e0", Pod:"goldmane-58fd7646b9-jddn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bc4d3738a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.818 [INFO][5992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.818 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" iface="eth0" netns="" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.818 [INFO][5992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.818 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.837 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.837 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.837 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.845 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.846 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" HandleID="k8s-pod-network.225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-goldmane--58fd7646b9--jddn9-eth0" Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.847 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.850732 containerd[1906]: 2025-07-12 00:08:15.848 [INFO][5992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8" Jul 12 00:08:15.850732 containerd[1906]: time="2025-07-12T00:08:15.850684224Z" level=info msg="TearDown network for sandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" successfully" Jul 12 00:08:15.866552 containerd[1906]: time="2025-07-12T00:08:15.866493457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:15.866737 containerd[1906]: time="2025-07-12T00:08:15.866575697Z" level=info msg="RemovePodSandbox \"225779e25e3beb6ea4a5dbb71c086cd89eb7259b3b520bcfd6c5f475864ed0c8\" returns successfully" Jul 12 00:08:15.867451 containerd[1906]: time="2025-07-12T00:08:15.867193816Z" level=info msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.899 [WARNING][6013] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"31fec83d-971d-44e2-913b-a79f6e564b60", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38", Pod:"calico-apiserver-b47f68fd8-s5g4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic28a73b6fda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.899 [INFO][6013] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.899 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" iface="eth0" netns="" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.899 [INFO][6013] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.899 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.919 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.919 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.919 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.928 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.929 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.930 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:15.934474 containerd[1906]: 2025-07-12 00:08:15.933 [INFO][6013] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:15.935079 containerd[1906]: time="2025-07-12T00:08:15.934688105Z" level=info msg="TearDown network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" successfully" Jul 12 00:08:15.935079 containerd[1906]: time="2025-07-12T00:08:15.934714065Z" level=info msg="StopPodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" returns successfully" Jul 12 00:08:15.935764 containerd[1906]: time="2025-07-12T00:08:15.935509024Z" level=info msg="RemovePodSandbox for \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" Jul 12 00:08:15.935764 containerd[1906]: time="2025-07-12T00:08:15.935537344Z" level=info msg="Forcibly stopping sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\"" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:15.979 [WARNING][6034] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"31fec83d-971d-44e2-913b-a79f6e564b60", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38", Pod:"calico-apiserver-b47f68fd8-s5g4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic28a73b6fda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:15.979 [INFO][6034] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:15.979 [INFO][6034] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" iface="eth0" netns="" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:15.979 [INFO][6034] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:15.979 [INFO][6034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.004 [INFO][6042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.004 [INFO][6042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.004 [INFO][6042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.014 [WARNING][6042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.014 [INFO][6042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" HandleID="k8s-pod-network.d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--s5g4q-eth0" Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.015 [INFO][6042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.019202 containerd[1906]: 2025-07-12 00:08:16.017 [INFO][6034] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5" Jul 12 00:08:16.019202 containerd[1906]: time="2025-07-12T00:08:16.018894265Z" level=info msg="TearDown network for sandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" successfully" Jul 12 00:08:16.033327 containerd[1906]: time="2025-07-12T00:08:16.033278778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:16.033549 containerd[1906]: time="2025-07-12T00:08:16.033448058Z" level=info msg="RemovePodSandbox \"d7405fc3e0a02393e05022a77b0f38002aef06af44a0aa9ad1146501e3ee88f5\" returns successfully" Jul 12 00:08:16.034443 containerd[1906]: time="2025-07-12T00:08:16.034418578Z" level=info msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.075 [WARNING][6057] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.075 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.075 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" iface="eth0" netns="" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.075 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.075 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.094 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.094 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.094 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.103 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.103 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.104 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.107279 containerd[1906]: 2025-07-12 00:08:16.105 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.107279 containerd[1906]: time="2025-07-12T00:08:16.107176584Z" level=info msg="TearDown network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" successfully" Jul 12 00:08:16.107279 containerd[1906]: time="2025-07-12T00:08:16.107203784Z" level=info msg="StopPodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" returns successfully" Jul 12 00:08:16.108785 containerd[1906]: time="2025-07-12T00:08:16.108470983Z" level=info msg="RemovePodSandbox for \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" Jul 12 00:08:16.108785 containerd[1906]: time="2025-07-12T00:08:16.108513623Z" level=info msg="Forcibly stopping sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\"" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.143 [WARNING][6078] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" WorkloadEndpoint="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.143 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.143 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" iface="eth0" netns="" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.143 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.143 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.162 [INFO][6085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.162 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.163 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.174 [WARNING][6085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.174 [INFO][6085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" HandleID="k8s-pod-network.f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-whisker--6488b96975--7fwjv-eth0" Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.176 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.178808 containerd[1906]: 2025-07-12 00:08:16.177 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210" Jul 12 00:08:16.179796 containerd[1906]: time="2025-07-12T00:08:16.179325630Z" level=info msg="TearDown network for sandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" successfully" Jul 12 00:08:16.886368 containerd[1906]: time="2025-07-12T00:08:16.886268178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:16.886368 containerd[1906]: time="2025-07-12T00:08:16.886356338Z" level=info msg="RemovePodSandbox \"f07f46687783c8b797483ba09d7dc0ecd300929ac6c8d2d13e5fcd496452a210\" returns successfully" Jul 12 00:08:16.893869 containerd[1906]: time="2025-07-12T00:08:16.893835175Z" level=info msg="StopPodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.932 [WARNING][6103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0", GenerateName:"calico-kube-controllers-5649f74b8d-", Namespace:"calico-system", SelfLink:"", UID:"d03f5dbf-aed9-446d-a8ba-7e995cb04351", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5649f74b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999", Pod:"calico-kube-controllers-5649f74b8d-dpdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09b9ed1c00a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.932 [INFO][6103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.932 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" iface="eth0" netns="" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.932 [INFO][6103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.932 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.962 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.962 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.962 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.977 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.977 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.980 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:16.985504 containerd[1906]: 2025-07-12 00:08:16.983 [INFO][6103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:16.986241 containerd[1906]: time="2025-07-12T00:08:16.985765092Z" level=info msg="TearDown network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" successfully" Jul 12 00:08:16.986241 containerd[1906]: time="2025-07-12T00:08:16.985792612Z" level=info msg="StopPodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" returns successfully" Jul 12 00:08:16.986819 containerd[1906]: time="2025-07-12T00:08:16.986787611Z" level=info msg="RemovePodSandbox for \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" Jul 12 00:08:16.986876 containerd[1906]: time="2025-07-12T00:08:16.986824531Z" level=info msg="Forcibly stopping sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\"" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.022 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0", GenerateName:"calico-kube-controllers-5649f74b8d-", Namespace:"calico-system", SelfLink:"", UID:"d03f5dbf-aed9-446d-a8ba-7e995cb04351", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5649f74b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"ebfeedd934e414c6868a088429689c97b43845223dca41f5db2efa751cae7999", Pod:"calico-kube-controllers-5649f74b8d-dpdd7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09b9ed1c00a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.022 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.022 [INFO][6124] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" iface="eth0" netns="" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.022 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.022 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.051 [INFO][6132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.051 [INFO][6132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.051 [INFO][6132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.061 [WARNING][6132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.061 [INFO][6132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" HandleID="k8s-pod-network.3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--kube--controllers--5649f74b8d--dpdd7-eth0" Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.063 [INFO][6132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.072752 containerd[1906]: 2025-07-12 00:08:17.069 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7" Jul 12 00:08:17.073273 containerd[1906]: time="2025-07-12T00:08:17.072752651Z" level=info msg="TearDown network for sandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" successfully" Jul 12 00:08:17.083554 containerd[1906]: time="2025-07-12T00:08:17.083491566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:17.083792 containerd[1906]: time="2025-07-12T00:08:17.083571846Z" level=info msg="RemovePodSandbox \"3cf611310be71ade4891d793af1be7657045d1aabdfae6a9a3db2e54958d17d7\" returns successfully" Jul 12 00:08:17.084291 containerd[1906]: time="2025-07-12T00:08:17.084257485Z" level=info msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.122 [WARNING][6146] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b68f90c7-c121-47a4-9328-c85559bf7c5c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7", Pod:"csi-node-driver-7wgr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6761bde42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.122 [INFO][6146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.122 [INFO][6146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" iface="eth0" netns="" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.122 [INFO][6146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.122 [INFO][6146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.153 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.153 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.153 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.162 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.162 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.164 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.167484 containerd[1906]: 2025-07-12 00:08:17.165 [INFO][6146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.167484 containerd[1906]: time="2025-07-12T00:08:17.167395646Z" level=info msg="TearDown network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" successfully" Jul 12 00:08:17.167484 containerd[1906]: time="2025-07-12T00:08:17.167419006Z" level=info msg="StopPodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" returns successfully" Jul 12 00:08:17.169004 containerd[1906]: time="2025-07-12T00:08:17.168725686Z" level=info msg="RemovePodSandbox for \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" Jul 12 00:08:17.169004 containerd[1906]: time="2025-07-12T00:08:17.168759526Z" level=info msg="Forcibly stopping sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\"" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.229 [WARNING][6168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b68f90c7-c121-47a4-9328-c85559bf7c5c", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7", Pod:"csi-node-driver-7wgr4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4a6761bde42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.230 [INFO][6168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.230 [INFO][6168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" iface="eth0" netns="" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.230 [INFO][6168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.230 [INFO][6168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.252 [INFO][6175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.252 [INFO][6175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.252 [INFO][6175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.260 [WARNING][6175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.260 [INFO][6175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" HandleID="k8s-pod-network.ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-csi--node--driver--7wgr4-eth0" Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.262 [INFO][6175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.264749 containerd[1906]: 2025-07-12 00:08:17.263 [INFO][6168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce" Jul 12 00:08:17.265207 containerd[1906]: time="2025-07-12T00:08:17.264962081Z" level=info msg="TearDown network for sandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" successfully" Jul 12 00:08:17.278110 containerd[1906]: time="2025-07-12T00:08:17.277801755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:17.278449 containerd[1906]: time="2025-07-12T00:08:17.278299794Z" level=info msg="RemovePodSandbox \"ae9232c31a8e4ae538a19cfad77f2bea3a7c614c1c465f608c3fd49fc645fbce\" returns successfully" Jul 12 00:08:17.279585 containerd[1906]: time="2025-07-12T00:08:17.279516274Z" level=info msg="StopPodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.325 [WARNING][6189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"90652f12-5f24-4f92-ba31-8e8fc442c377", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23", Pod:"calico-apiserver-b47f68fd8-ttd9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68119f7e6c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.325 [INFO][6189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.325 [INFO][6189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" iface="eth0" netns="" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.325 [INFO][6189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.325 [INFO][6189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.348 [INFO][6197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.349 [INFO][6197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.349 [INFO][6197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.358 [WARNING][6197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.358 [INFO][6197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.360 [INFO][6197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.363374 containerd[1906]: 2025-07-12 00:08:17.361 [INFO][6189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.363823 containerd[1906]: time="2025-07-12T00:08:17.363404834Z" level=info msg="TearDown network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" successfully" Jul 12 00:08:17.363823 containerd[1906]: time="2025-07-12T00:08:17.363434834Z" level=info msg="StopPodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" returns successfully" Jul 12 00:08:17.364328 containerd[1906]: time="2025-07-12T00:08:17.364281394Z" level=info msg="RemovePodSandbox for \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" Jul 12 00:08:17.364328 containerd[1906]: time="2025-07-12T00:08:17.364323514Z" level=info msg="Forcibly stopping sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\"" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.405 [WARNING][6211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0", GenerateName:"calico-apiserver-b47f68fd8-", Namespace:"calico-apiserver", SelfLink:"", UID:"90652f12-5f24-4f92-ba31-8e8fc442c377", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b47f68fd8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23", Pod:"calico-apiserver-b47f68fd8-ttd9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68119f7e6c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.406 [INFO][6211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.406 [INFO][6211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" iface="eth0" netns="" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.406 [INFO][6211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.406 [INFO][6211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.442 [INFO][6218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.442 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.442 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.454 [WARNING][6218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.454 [INFO][6218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" HandleID="k8s-pod-network.dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-calico--apiserver--b47f68fd8--ttd9w-eth0" Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.456 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.459433 containerd[1906]: 2025-07-12 00:08:17.457 [INFO][6211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf" Jul 12 00:08:17.459939 containerd[1906]: time="2025-07-12T00:08:17.459537789Z" level=info msg="TearDown network for sandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" successfully" Jul 12 00:08:17.475034 containerd[1906]: time="2025-07-12T00:08:17.474956062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:17.475034 containerd[1906]: time="2025-07-12T00:08:17.475040182Z" level=info msg="RemovePodSandbox \"dd85fa63ad8d5e48c9129267c329ca74f687a80b7ad42f91cdd6e7108eda1adf\" returns successfully" Jul 12 00:08:17.476186 containerd[1906]: time="2025-07-12T00:08:17.475901902Z" level=info msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.516 [WARNING][6233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63578b12-3c8e-4f1d-8853-0022228cafa4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278", Pod:"coredns-7c65d6cfc9-blcvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8469dd7741", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.516 [INFO][6233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.516 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" iface="eth0" netns="" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.516 [INFO][6233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.516 [INFO][6233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.542 [INFO][6241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.542 [INFO][6241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.542 [INFO][6241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.551 [WARNING][6241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.551 [INFO][6241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.553 [INFO][6241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.555876 containerd[1906]: 2025-07-12 00:08:17.554 [INFO][6233] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.556502 containerd[1906]: time="2025-07-12T00:08:17.556376344Z" level=info msg="TearDown network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" successfully" Jul 12 00:08:17.556502 containerd[1906]: time="2025-07-12T00:08:17.556406544Z" level=info msg="StopPodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" returns successfully" Jul 12 00:08:17.557128 containerd[1906]: time="2025-07-12T00:08:17.557074224Z" level=info msg="RemovePodSandbox for \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" Jul 12 00:08:17.557461 containerd[1906]: time="2025-07-12T00:08:17.557202583Z" level=info msg="Forcibly stopping sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\"" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.596 [WARNING][6255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"63578b12-3c8e-4f1d-8853-0022228cafa4", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.4-n-0fb9ec6aad", ContainerID:"483bc117a4e1c42c1291e92b90045dcf49cc804a69001a38d05995e8272c5278", Pod:"coredns-7c65d6cfc9-blcvm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid8469dd7741", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 
00:08:17.597 [INFO][6255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.597 [INFO][6255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" iface="eth0" netns="" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.597 [INFO][6255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.597 [INFO][6255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.621 [INFO][6262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.621 [INFO][6262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.622 [INFO][6262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.636 [WARNING][6262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.636 [INFO][6262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" HandleID="k8s-pod-network.1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Workload="ci--4081.3.4--n--0fb9ec6aad-k8s-coredns--7c65d6cfc9--blcvm-eth0" Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.638 [INFO][6262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:08:17.644676 containerd[1906]: 2025-07-12 00:08:17.642 [INFO][6255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376" Jul 12 00:08:17.645208 containerd[1906]: time="2025-07-12T00:08:17.644696662Z" level=info msg="TearDown network for sandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" successfully" Jul 12 00:08:18.196605 containerd[1906]: time="2025-07-12T00:08:18.196416924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 12 00:08:18.196605 containerd[1906]: time="2025-07-12T00:08:18.196490044Z" level=info msg="RemovePodSandbox \"1b438eed0548a86b0f676d39b62ab6cc63d87d209f894c9b147d12df3514b376\" returns successfully" Jul 12 00:08:18.254283 containerd[1906]: time="2025-07-12T00:08:18.254209536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.258571 containerd[1906]: time="2025-07-12T00:08:18.258519014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:08:18.272188 containerd[1906]: time="2025-07-12T00:08:18.272111088Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.279425 containerd[1906]: time="2025-07-12T00:08:18.279349045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.280497 containerd[1906]: time="2025-07-12T00:08:18.280113884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 3.475480449s" Jul 12 00:08:18.280497 containerd[1906]: time="2025-07-12T00:08:18.280148124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:18.281539 containerd[1906]: time="2025-07-12T00:08:18.281499564Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:08:18.283568 containerd[1906]: time="2025-07-12T00:08:18.283504603Z" level=info msg="CreateContainer within sandbox \"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:18.337669 containerd[1906]: time="2025-07-12T00:08:18.337621577Z" level=info msg="CreateContainer within sandbox \"35269171c48539b764bd05bfa2cc2604c520c7a281c1b7f8df34e9b88532ff23\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8bb7f9d79a02b6cb229345eb9822ecc4c3f3655543af77bf36d20f64ce36e753\"" Jul 12 00:08:18.338414 containerd[1906]: time="2025-07-12T00:08:18.338380537Z" level=info msg="StartContainer for \"8bb7f9d79a02b6cb229345eb9822ecc4c3f3655543af77bf36d20f64ce36e753\"" Jul 12 00:08:18.397960 containerd[1906]: time="2025-07-12T00:08:18.397911869Z" level=info msg="StartContainer for \"8bb7f9d79a02b6cb229345eb9822ecc4c3f3655543af77bf36d20f64ce36e753\" returns successfully" Jul 12 00:08:18.696859 containerd[1906]: time="2025-07-12T00:08:18.696124729Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:18.703207 containerd[1906]: time="2025-07-12T00:08:18.702483806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:08:18.704317 containerd[1906]: time="2025-07-12T00:08:18.704291205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 422.751481ms" Jul 12 00:08:18.704429 containerd[1906]: time="2025-07-12T00:08:18.704414525Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:08:18.706572 containerd[1906]: time="2025-07-12T00:08:18.706552924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:08:18.708466 containerd[1906]: time="2025-07-12T00:08:18.708431923Z" level=info msg="CreateContainer within sandbox \"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:08:18.762364 containerd[1906]: time="2025-07-12T00:08:18.762323538Z" level=info msg="CreateContainer within sandbox \"7a9ba4cdf94ed68a2392f6d66f3d8204d23c4a91432eef745c9627d1d0202a38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63a639aa16aa18d21878baa26de84b43ce50b978dfa2ec0371630e386ee1763a\"" Jul 12 00:08:18.764264 containerd[1906]: time="2025-07-12T00:08:18.763107458Z" level=info msg="StartContainer for \"63a639aa16aa18d21878baa26de84b43ce50b978dfa2ec0371630e386ee1763a\"" Jul 12 00:08:18.838773 kubelet[3316]: I0712 00:08:18.836724 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-jddn9" podStartSLOduration=31.850441591 podStartE2EDuration="38.836706903s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:08:06.133246058 +0000 UTC m=+50.875377164" lastFinishedPulling="2025-07-12 00:08:13.11951137 +0000 UTC m=+57.861642476" observedRunningTime="2025-07-12 00:08:13.705878204 +0000 UTC m=+58.448009310" watchObservedRunningTime="2025-07-12 00:08:18.836706903 +0000 UTC m=+63.578837969" Jul 12 00:08:18.842193 kubelet[3316]: I0712 00:08:18.841781 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b47f68fd8-ttd9w" podStartSLOduration=33.816287004 podStartE2EDuration="44.841761941s" 
podCreationTimestamp="2025-07-12 00:07:34 +0000 UTC" firstStartedPulling="2025-07-12 00:08:07.255537787 +0000 UTC m=+51.997668893" lastFinishedPulling="2025-07-12 00:08:18.281012724 +0000 UTC m=+63.023143830" observedRunningTime="2025-07-12 00:08:18.836422063 +0000 UTC m=+63.578553169" watchObservedRunningTime="2025-07-12 00:08:18.841761941 +0000 UTC m=+63.583893047" Jul 12 00:08:18.873408 containerd[1906]: time="2025-07-12T00:08:18.873034086Z" level=info msg="StartContainer for \"63a639aa16aa18d21878baa26de84b43ce50b978dfa2ec0371630e386ee1763a\" returns successfully" Jul 12 00:08:19.326347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964325974.mount: Deactivated successfully. Jul 12 00:08:19.852125 kubelet[3316]: I0712 00:08:19.850445 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b47f68fd8-s5g4q" podStartSLOduration=34.855242789 podStartE2EDuration="45.850425788s" podCreationTimestamp="2025-07-12 00:07:34 +0000 UTC" firstStartedPulling="2025-07-12 00:08:07.709965126 +0000 UTC m=+52.452096232" lastFinishedPulling="2025-07-12 00:08:18.705148125 +0000 UTC m=+63.447279231" observedRunningTime="2025-07-12 00:08:19.850243668 +0000 UTC m=+64.592374734" watchObservedRunningTime="2025-07-12 00:08:19.850425788 +0000 UTC m=+64.592556894" Jul 12 00:08:20.366538 containerd[1906]: time="2025-07-12T00:08:20.366484826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.371012 containerd[1906]: time="2025-07-12T00:08:20.370951543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:08:20.377211 containerd[1906]: time="2025-07-12T00:08:20.377162660Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.386147 containerd[1906]: time="2025-07-12T00:08:20.386069496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:20.387141 containerd[1906]: time="2025-07-12T00:08:20.386645816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.679982052s" Jul 12 00:08:20.387141 containerd[1906]: time="2025-07-12T00:08:20.386680936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:08:20.389449 containerd[1906]: time="2025-07-12T00:08:20.389394895Z" level=info msg="CreateContainer within sandbox \"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:08:20.439849 containerd[1906]: time="2025-07-12T00:08:20.439777631Z" level=info msg="CreateContainer within sandbox \"a18c6acf999737b64bb09bc08b50ece8fe66096014e0336c81dde1185a71dde7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7e8b1a8cb3d29169ccd7f13d338e35cd0ae2fb4ba80679bef30aafc713e7ea27\"" Jul 12 00:08:20.440690 containerd[1906]: time="2025-07-12T00:08:20.440506831Z" level=info msg="StartContainer for \"7e8b1a8cb3d29169ccd7f13d338e35cd0ae2fb4ba80679bef30aafc713e7ea27\"" Jul 12 00:08:20.497755 containerd[1906]: 
time="2025-07-12T00:08:20.497589044Z" level=info msg="StartContainer for \"7e8b1a8cb3d29169ccd7f13d338e35cd0ae2fb4ba80679bef30aafc713e7ea27\" returns successfully" Jul 12 00:08:20.610795 kubelet[3316]: I0712 00:08:20.610722 3316 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:08:20.613675 kubelet[3316]: I0712 00:08:20.613561 3316 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:08:20.855447 kubelet[3316]: I0712 00:08:20.855375 3316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7wgr4" podStartSLOduration=27.59557367 podStartE2EDuration="40.855357436s" podCreationTimestamp="2025-07-12 00:07:40 +0000 UTC" firstStartedPulling="2025-07-12 00:08:07.12774217 +0000 UTC m=+51.869873276" lastFinishedPulling="2025-07-12 00:08:20.387525936 +0000 UTC m=+65.129657042" observedRunningTime="2025-07-12 00:08:20.854893956 +0000 UTC m=+65.597025022" watchObservedRunningTime="2025-07-12 00:08:20.855357436 +0000 UTC m=+65.597488502" Jul 12 00:09:14.726888 systemd[1]: run-containerd-runc-k8s.io-d34d3432cc62259d0f391704b13a8b9b98605aecf3c51fdc979c4fedc002c51d-runc.r7MXlz.mount: Deactivated successfully. Jul 12 00:09:38.007444 systemd[1]: Started sshd@7-10.200.20.44:22-10.200.16.10:34678.service - OpenSSH per-connection server daemon (10.200.16.10:34678). Jul 12 00:09:38.458958 sshd[6664]: Accepted publickey for core from 10.200.16.10 port 34678 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:38.460872 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:38.465026 systemd-logind[1763]: New session 10 of user core. Jul 12 00:09:38.472213 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 12 00:09:38.862326 sshd[6664]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:38.866007 systemd[1]: sshd@7-10.200.20.44:22-10.200.16.10:34678.service: Deactivated successfully. Jul 12 00:09:38.869495 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:09:38.871799 systemd-logind[1763]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:09:38.872758 systemd-logind[1763]: Removed session 10. Jul 12 00:09:43.938238 systemd[1]: Started sshd@8-10.200.20.44:22-10.200.16.10:49074.service - OpenSSH per-connection server daemon (10.200.16.10:49074). Jul 12 00:09:44.367139 sshd[6700]: Accepted publickey for core from 10.200.16.10 port 49074 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:44.366884 sshd[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:44.372262 systemd-logind[1763]: New session 11 of user core. Jul 12 00:09:44.377425 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:09:44.771758 sshd[6700]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:44.776821 systemd[1]: sshd@8-10.200.20.44:22-10.200.16.10:49074.service: Deactivated successfully. Jul 12 00:09:44.779656 systemd-logind[1763]: Session 11 logged out. Waiting for processes to exit. Jul 12 00:09:44.779898 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:09:44.782291 systemd-logind[1763]: Removed session 11. Jul 12 00:09:49.848346 systemd[1]: Started sshd@9-10.200.20.44:22-10.200.16.10:44790.service - OpenSSH per-connection server daemon (10.200.16.10:44790). Jul 12 00:09:50.297041 sshd[6734]: Accepted publickey for core from 10.200.16.10 port 44790 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:50.298452 sshd[6734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:50.302803 systemd-logind[1763]: New session 12 of user core. 
Jul 12 00:09:50.306448 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:09:50.693242 sshd[6734]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:50.696201 systemd-logind[1763]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:09:50.696423 systemd[1]: sshd@9-10.200.20.44:22-10.200.16.10:44790.service: Deactivated successfully. Jul 12 00:09:50.699384 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:09:50.701496 systemd-logind[1763]: Removed session 12. Jul 12 00:09:50.776384 systemd[1]: Started sshd@10-10.200.20.44:22-10.200.16.10:44792.service - OpenSSH per-connection server daemon (10.200.16.10:44792). Jul 12 00:09:51.223838 sshd[6749]: Accepted publickey for core from 10.200.16.10 port 44792 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:51.225541 sshd[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:51.229373 systemd-logind[1763]: New session 13 of user core. Jul 12 00:09:51.236535 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:09:51.651665 sshd[6749]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:51.655214 systemd[1]: sshd@10-10.200.20.44:22-10.200.16.10:44792.service: Deactivated successfully. Jul 12 00:09:51.658363 systemd-logind[1763]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:09:51.659201 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 00:09:51.660426 systemd-logind[1763]: Removed session 13. Jul 12 00:09:51.731341 systemd[1]: Started sshd@11-10.200.20.44:22-10.200.16.10:44806.service - OpenSSH per-connection server daemon (10.200.16.10:44806). 
Jul 12 00:09:52.182313 sshd[6761]: Accepted publickey for core from 10.200.16.10 port 44806 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:52.183131 sshd[6761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:52.189666 systemd-logind[1763]: New session 14 of user core. Jul 12 00:09:52.197371 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:09:52.574346 sshd[6761]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:52.578255 systemd-logind[1763]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:09:52.578977 systemd[1]: sshd@11-10.200.20.44:22-10.200.16.10:44806.service: Deactivated successfully. Jul 12 00:09:52.582562 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:09:52.583875 systemd-logind[1763]: Removed session 14. Jul 12 00:09:57.662314 systemd[1]: Started sshd@12-10.200.20.44:22-10.200.16.10:44812.service - OpenSSH per-connection server daemon (10.200.16.10:44812). Jul 12 00:09:58.131581 sshd[6820]: Accepted publickey for core from 10.200.16.10 port 44812 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:09:58.132965 sshd[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:58.136973 systemd-logind[1763]: New session 15 of user core. Jul 12 00:09:58.142378 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:09:58.541163 sshd[6820]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:58.545242 systemd-logind[1763]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:09:58.546202 systemd[1]: sshd@12-10.200.20.44:22-10.200.16.10:44812.service: Deactivated successfully. Jul 12 00:09:58.549667 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:09:58.551835 systemd-logind[1763]: Removed session 15. 
Jul 12 00:10:03.612588 systemd[1]: Started sshd@13-10.200.20.44:22-10.200.16.10:39088.service - OpenSSH per-connection server daemon (10.200.16.10:39088). Jul 12 00:10:04.035922 sshd[6857]: Accepted publickey for core from 10.200.16.10 port 39088 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:04.037320 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:04.041142 systemd-logind[1763]: New session 16 of user core. Jul 12 00:10:04.044342 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:10:04.422338 sshd[6857]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:04.425188 systemd[1]: sshd@13-10.200.20.44:22-10.200.16.10:39088.service: Deactivated successfully. Jul 12 00:10:04.429215 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:10:04.430602 systemd-logind[1763]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:10:04.432127 systemd-logind[1763]: Removed session 16. Jul 12 00:10:09.512327 systemd[1]: Started sshd@14-10.200.20.44:22-10.200.16.10:39102.service - OpenSSH per-connection server daemon (10.200.16.10:39102). Jul 12 00:10:09.994772 sshd[6871]: Accepted publickey for core from 10.200.16.10 port 39102 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:09.996248 sshd[6871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:10.002209 systemd-logind[1763]: New session 17 of user core. Jul 12 00:10:10.007334 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 00:10:10.411999 sshd[6871]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:10.415453 systemd[1]: sshd@14-10.200.20.44:22-10.200.16.10:39102.service: Deactivated successfully. Jul 12 00:10:10.419025 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:10:10.420472 systemd-logind[1763]: Session 17 logged out. Waiting for processes to exit. 
Jul 12 00:10:10.421415 systemd-logind[1763]: Removed session 17. Jul 12 00:10:15.487133 systemd[1]: Started sshd@15-10.200.20.44:22-10.200.16.10:35976.service - OpenSSH per-connection server daemon (10.200.16.10:35976). Jul 12 00:10:15.921642 sshd[6907]: Accepted publickey for core from 10.200.16.10 port 35976 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:15.923388 sshd[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:15.929674 systemd-logind[1763]: New session 18 of user core. Jul 12 00:10:15.934415 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:10:16.312451 sshd[6907]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:16.315924 systemd-logind[1763]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:10:16.316311 systemd[1]: sshd@15-10.200.20.44:22-10.200.16.10:35976.service: Deactivated successfully. Jul 12 00:10:16.320490 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:10:16.321883 systemd-logind[1763]: Removed session 18. Jul 12 00:10:16.400393 systemd[1]: Started sshd@16-10.200.20.44:22-10.200.16.10:35988.service - OpenSSH per-connection server daemon (10.200.16.10:35988). Jul 12 00:10:16.886409 sshd[6921]: Accepted publickey for core from 10.200.16.10 port 35988 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:16.887776 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:16.892232 systemd-logind[1763]: New session 19 of user core. Jul 12 00:10:16.895320 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:10:17.432325 sshd[6921]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:17.435503 systemd[1]: sshd@16-10.200.20.44:22-10.200.16.10:35988.service: Deactivated successfully. Jul 12 00:10:17.439913 systemd[1]: session-19.scope: Deactivated successfully. 
Jul 12 00:10:17.441480 systemd-logind[1763]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:10:17.442495 systemd-logind[1763]: Removed session 19. Jul 12 00:10:17.505321 systemd[1]: Started sshd@17-10.200.20.44:22-10.200.16.10:35994.service - OpenSSH per-connection server daemon (10.200.16.10:35994). Jul 12 00:10:17.931143 sshd[6933]: Accepted publickey for core from 10.200.16.10 port 35994 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:17.933652 sshd[6933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:17.938277 systemd-logind[1763]: New session 20 of user core. Jul 12 00:10:17.941333 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 00:10:20.076316 sshd[6933]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:20.082501 systemd-logind[1763]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:10:20.084386 systemd[1]: sshd@17-10.200.20.44:22-10.200.16.10:35994.service: Deactivated successfully. Jul 12 00:10:20.089560 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:10:20.096129 systemd-logind[1763]: Removed session 20. Jul 12 00:10:20.157374 systemd[1]: Started sshd@18-10.200.20.44:22-10.200.16.10:58930.service - OpenSSH per-connection server daemon (10.200.16.10:58930). Jul 12 00:10:20.621959 sshd[6954]: Accepted publickey for core from 10.200.16.10 port 58930 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:20.623392 sshd[6954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:20.627577 systemd-logind[1763]: New session 21 of user core. Jul 12 00:10:20.633471 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:10:21.133906 sshd[6954]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:21.136840 systemd-logind[1763]: Session 21 logged out. Waiting for processes to exit. 
Jul 12 00:10:21.138213 systemd[1]: sshd@18-10.200.20.44:22-10.200.16.10:58930.service: Deactivated successfully. Jul 12 00:10:21.139790 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:10:21.141446 systemd-logind[1763]: Removed session 21. Jul 12 00:10:21.215846 systemd[1]: Started sshd@19-10.200.20.44:22-10.200.16.10:58932.service - OpenSSH per-connection server daemon (10.200.16.10:58932). Jul 12 00:10:21.680917 sshd[6965]: Accepted publickey for core from 10.200.16.10 port 58932 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:21.682381 sshd[6965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:21.686481 systemd-logind[1763]: New session 22 of user core. Jul 12 00:10:21.695320 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 00:10:22.089510 sshd[6965]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:22.093687 systemd-logind[1763]: Session 22 logged out. Waiting for processes to exit. Jul 12 00:10:22.094395 systemd[1]: sshd@19-10.200.20.44:22-10.200.16.10:58932.service: Deactivated successfully. Jul 12 00:10:22.097909 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 00:10:22.099031 systemd-logind[1763]: Removed session 22. Jul 12 00:10:27.172444 systemd[1]: Started sshd@20-10.200.20.44:22-10.200.16.10:58938.service - OpenSSH per-connection server daemon (10.200.16.10:58938). Jul 12 00:10:27.640113 sshd[7022]: Accepted publickey for core from 10.200.16.10 port 58938 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:27.641588 sshd[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:27.653983 systemd-logind[1763]: New session 23 of user core. Jul 12 00:10:27.659406 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 12 00:10:27.825475 systemd[1]: run-containerd-runc-k8s.io-fe6278fef3a1f5af8ff4b3dd72b11e936c2b67a6114bb50a91e6590c5bc3a9ab-runc.MLHobB.mount: Deactivated successfully. Jul 12 00:10:28.083560 sshd[7022]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:28.089639 systemd[1]: sshd@20-10.200.20.44:22-10.200.16.10:58938.service: Deactivated successfully. Jul 12 00:10:28.099625 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:10:28.100333 systemd-logind[1763]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:10:28.106401 systemd-logind[1763]: Removed session 23. Jul 12 00:10:33.162365 systemd[1]: Started sshd@21-10.200.20.44:22-10.200.16.10:48646.service - OpenSSH per-connection server daemon (10.200.16.10:48646). Jul 12 00:10:33.609714 sshd[7057]: Accepted publickey for core from 10.200.16.10 port 48646 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:33.611241 sshd[7057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:33.615175 systemd-logind[1763]: New session 24 of user core. Jul 12 00:10:33.619354 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:10:33.999040 sshd[7057]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:34.001830 systemd[1]: sshd@21-10.200.20.44:22-10.200.16.10:48646.service: Deactivated successfully. Jul 12 00:10:34.006139 systemd-logind[1763]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:10:34.006438 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:10:34.007604 systemd-logind[1763]: Removed session 24. Jul 12 00:10:39.085369 systemd[1]: Started sshd@22-10.200.20.44:22-10.200.16.10:48654.service - OpenSSH per-connection server daemon (10.200.16.10:48654). 
Jul 12 00:10:39.529400 sshd[7072]: Accepted publickey for core from 10.200.16.10 port 48654 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:39.530712 sshd[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:39.534824 systemd-logind[1763]: New session 25 of user core. Jul 12 00:10:39.543371 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 12 00:10:39.931545 sshd[7072]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:39.935795 systemd[1]: sshd@22-10.200.20.44:22-10.200.16.10:48654.service: Deactivated successfully. Jul 12 00:10:39.938517 systemd-logind[1763]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:10:39.939011 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:10:39.940164 systemd-logind[1763]: Removed session 25. Jul 12 00:10:45.015373 systemd[1]: Started sshd@23-10.200.20.44:22-10.200.16.10:51938.service - OpenSSH per-connection server daemon (10.200.16.10:51938). Jul 12 00:10:45.462800 sshd[7086]: Accepted publickey for core from 10.200.16.10 port 51938 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:45.464355 sshd[7086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:45.473149 systemd-logind[1763]: New session 26 of user core. Jul 12 00:10:45.477364 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:10:45.935396 sshd[7086]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:45.938740 systemd-logind[1763]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:10:45.940949 systemd[1]: sshd@23-10.200.20.44:22-10.200.16.10:51938.service: Deactivated successfully. Jul 12 00:10:45.945790 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:10:45.947134 systemd-logind[1763]: Removed session 26. 
Jul 12 00:10:51.024293 systemd[1]: Started sshd@24-10.200.20.44:22-10.200.16.10:42952.service - OpenSSH per-connection server daemon (10.200.16.10:42952). Jul 12 00:10:51.489120 sshd[7129]: Accepted publickey for core from 10.200.16.10 port 42952 ssh2: RSA SHA256:qH9VHaRtpiO4lAf4wpNpdYfuR0Irqn0Eedjb62ue/vA Jul 12 00:10:51.490568 sshd[7129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:10:51.497327 systemd-logind[1763]: New session 27 of user core. Jul 12 00:10:51.502404 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 12 00:10:51.894280 sshd[7129]: pam_unix(sshd:session): session closed for user core Jul 12 00:10:51.899245 systemd-logind[1763]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:10:51.899609 systemd[1]: sshd@24-10.200.20.44:22-10.200.16.10:42952.service: Deactivated successfully. Jul 12 00:10:51.903515 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:10:51.904604 systemd-logind[1763]: Removed session 27.