Jul 12 00:11:45.883721 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 00:11:45.883750 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025 Jul 12 00:11:45.883764 kernel: KASLR enabled Jul 12 00:11:45.883773 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jul 12 00:11:45.883781 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Jul 12 00:11:45.883807 kernel: random: crng init done Jul 12 00:11:45.883820 kernel: ACPI: Early table checksum verification disabled Jul 12 00:11:45.883829 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jul 12 00:11:45.883837 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jul 12 00:11:45.884153 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884165 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884173 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884182 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884191 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884202 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884213 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884223 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884232 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:11:45.884241 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jul 12 00:11:45.884250 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jul 12 00:11:45.884259 kernel: NUMA: Failed to initialise from firmware Jul 12 00:11:45.884268 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jul 12 00:11:45.884277 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Jul 12 00:11:45.884286 kernel: Zone ranges: Jul 12 00:11:45.884295 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 12 00:11:45.884306 kernel: DMA32 empty Jul 12 00:11:45.884315 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jul 12 00:11:45.884324 kernel: Movable zone start for each node Jul 12 00:11:45.884333 kernel: Early memory node ranges Jul 12 00:11:45.884342 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Jul 12 00:11:45.884351 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jul 12 00:11:45.884360 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jul 12 00:11:45.884369 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jul 12 00:11:45.884378 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jul 12 00:11:45.884387 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jul 12 00:11:45.884397 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jul 12 00:11:45.884406 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jul 12 00:11:45.884417 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jul 12 00:11:45.884426 kernel: psci: probing for conduit method from ACPI. 
Jul 12 00:11:45.884435 kernel: psci: PSCIv1.1 detected in firmware. Jul 12 00:11:45.884447 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:11:45.884473 kernel: psci: Trusted OS migration not required Jul 12 00:11:45.884483 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:11:45.884496 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 12 00:11:45.884506 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:11:45.884515 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:11:45.884525 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 12 00:11:45.884535 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:11:45.884543 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:11:45.884551 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:11:45.884558 kernel: CPU features: detected: Spectre-v4 Jul 12 00:11:45.884566 kernel: CPU features: detected: Spectre-BHB Jul 12 00:11:45.884573 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:11:45.884583 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:11:45.884590 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:11:45.884598 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:11:45.884654 kernel: alternatives: applying boot alternatives Jul 12 00:11:45.884665 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:11:45.884673 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:11:45.884681 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:11:45.884689 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:11:45.884696 kernel: Fallback order for Node 0: 0 Jul 12 00:11:45.884704 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jul 12 00:11:45.884711 kernel: Policy zone: Normal Jul 12 00:11:45.884721 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:11:45.884729 kernel: software IO TLB: area num 2. Jul 12 00:11:45.884737 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jul 12 00:11:45.884745 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved) Jul 12 00:11:45.884753 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 12 00:11:45.884761 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:11:45.884769 kernel: rcu: RCU event tracing is enabled. Jul 12 00:11:45.884777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 12 00:11:45.884785 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:11:45.884804 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:11:45.884812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:11:45.884848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 12 00:11:45.884856 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:11:45.884864 kernel: GICv3: 256 SPIs implemented Jul 12 00:11:45.884871 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:11:45.884878 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:11:45.884886 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:11:45.884894 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 12 00:11:45.884901 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 12 00:11:45.884909 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:11:45.884917 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:11:45.884924 kernel: GICv3: using LPI property table @0x00000001000e0000 Jul 12 00:11:45.884932 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jul 12 00:11:45.884942 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:11:45.884950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:11:45.884957 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:11:45.884965 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:11:45.884973 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:11:45.884980 kernel: Console: colour dummy device 80x25 Jul 12 00:11:45.884988 kernel: ACPI: Core revision 20230628 Jul 12 00:11:45.884996 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:11:45.885005 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:11:45.885013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:11:45.885022 kernel: landlock: Up and running. Jul 12 00:11:45.885030 kernel: SELinux: Initializing. Jul 12 00:11:45.885038 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:11:45.885046 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:11:45.885054 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:11:45.885062 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 12 00:11:45.885071 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:11:45.885079 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:11:45.885086 kernel: Platform MSI: ITS@0x8080000 domain created Jul 12 00:11:45.885096 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 12 00:11:45.885103 kernel: Remapping and enabling EFI services. Jul 12 00:11:45.885112 kernel: smp: Bringing up secondary CPUs ... Jul 12 00:11:45.885121 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:11:45.885129 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 12 00:11:45.885136 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jul 12 00:11:45.885144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:11:45.885152 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:11:45.885160 kernel: smp: Brought up 1 node, 2 CPUs Jul 12 00:11:45.885167 kernel: SMP: Total of 2 processors activated. 
Jul 12 00:11:45.885177 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:11:45.885185 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:11:45.885199 kernel: CPU features: detected: Common not Private translations Jul 12 00:11:45.885209 kernel: CPU features: detected: CRC32 instructions Jul 12 00:11:45.885217 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 12 00:11:45.885225 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:11:45.885233 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:11:45.885242 kernel: CPU features: detected: Privileged Access Never Jul 12 00:11:45.885250 kernel: CPU features: detected: RAS Extension Support Jul 12 00:11:45.885260 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 12 00:11:45.885269 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:11:45.885277 kernel: alternatives: applying system-wide alternatives Jul 12 00:11:45.885285 kernel: devtmpfs: initialized Jul 12 00:11:45.885293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:11:45.885302 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 12 00:11:45.885310 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:11:45.885320 kernel: SMBIOS 3.0.0 present. Jul 12 00:11:45.885328 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jul 12 00:11:45.885337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:11:45.885345 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:11:45.885353 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:11:45.885362 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:11:45.885370 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:11:45.885379 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Jul 12 00:11:45.885387 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:11:45.885397 kernel: cpuidle: using governor menu Jul 12 00:11:45.885405 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 12 00:11:45.885414 kernel: ASID allocator initialised with 32768 entries Jul 12 00:11:45.885422 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:11:45.885430 kernel: Serial: AMBA PL011 UART driver Jul 12 00:11:45.885439 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:11:45.885447 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:11:45.885463 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:11:45.885472 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:11:45.885483 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:11:45.885492 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:11:45.885500 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:11:45.885508 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:11:45.885516 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:11:45.885524 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:11:45.885533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:11:45.885541 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:11:45.885549 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:11:45.885559 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:11:45.885567 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:11:45.885576 kernel: ACPI: Interpreter enabled Jul 12 00:11:45.885584 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:11:45.885592 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:11:45.885601 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:11:45.885609 kernel: printk: console [ttyAMA0] enabled Jul 12 00:11:45.885617 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 00:11:45.885846 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:11:45.885951 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:11:45.886025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:11:45.886099 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 12 00:11:45.886172 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 12 00:11:45.886183 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 12 00:11:45.886191 kernel: PCI host bridge to bus 0000:00 Jul 12 00:11:45.886270 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 12 00:11:45.886337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:11:45.886401 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 12 00:11:45.886511 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 00:11:45.886613 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 12 00:11:45.886720 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jul 12 00:11:45.886805 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jul 12 00:11:45.886883 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jul 12 00:11:45.886956 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887022 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 12 00:11:45.887100 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887166 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jul 12 00:11:45.887240 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887308 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jul 12 00:11:45.887379 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887444 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jul 12 00:11:45.887533 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887601 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jul 12 00:11:45.887672 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.887743 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jul 12 00:11:45.889910 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.890010 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jul 12 00:11:45.890084 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.890151 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jul 12 00:11:45.890222 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jul 12 00:11:45.890287 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jul 12 00:11:45.890374 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jul 12 00:11:45.890439 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jul 12 00:11:45.890569 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jul 12 00:11:45.890642 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jul 12 00:11:45.890709 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:11:45.890774 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jul 12 00:11:45.890908 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 12 00:11:45.890982 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jul 12 00:11:45.891065 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jul 12 00:11:45.891135 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jul 12 00:11:45.891208 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jul 12 00:11:45.891301 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jul 12 00:11:45.891370 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jul 12 00:11:45.891457 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 12 00:11:45.891533 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jul 12 00:11:45.891609 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jul 12 00:11:45.891677 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jul 12 00:11:45.891743 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jul 12 00:11:45.893905 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jul 12 00:11:45.894003 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jul 12 00:11:45.894071 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jul 12 00:11:45.894138 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jul 12 00:11:45.894207 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 12 00:11:45.894271 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jul 12 00:11:45.894335 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jul 12 00:11:45.894407 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 12 00:11:45.894494 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 12 00:11:45.894562 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jul 12 00:11:45.894631 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 12 00:11:45.894697 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jul 12 00:11:45.894760 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 12 00:11:45.897950 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 12 00:11:45.898062 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jul 12 00:11:45.898136 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 12 00:11:45.898204 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 12 00:11:45.898268 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jul 12 00:11:45.898332 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Jul 12 00:11:45.898400 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 12 00:11:45.898507 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jul 12 00:11:45.898578 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jul 12 00:11:45.898648 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 12 00:11:45.898712 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jul 12 00:11:45.898775 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jul 12 00:11:45.898858 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 12 00:11:45.898923 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jul 12 00:11:45.898988 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jul 12 00:11:45.899085 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 12 00:11:45.899166 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jul 12 00:11:45.899235 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jul 12 00:11:45.899901 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jul 12 00:11:45.899982 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:11:45.900061 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jul 12 00:11:45.900131 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jul 12 00:11:45.900201 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jul 12 00:11:45.900274 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jul 12 00:11:45.900342 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jul 12 00:11:45.900410 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jul 12 00:11:45.900495 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jul 12 00:11:45.900566 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jul 12 00:11:45.900642 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jul 12 00:11:45.900711 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 12 00:11:45.900785 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jul 12 00:11:45.902022 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 12 00:11:45.902131 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jul 12 00:11:45.902239 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 12 00:11:45.902311 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jul 12 00:11:45.902375 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jul 12 00:11:45.902442 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jul 12 00:11:45.902537 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jul 12 00:11:45.902607 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jul 12 00:11:45.902670 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 12 00:11:45.902735 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jul 12 00:11:45.902824 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 12 00:11:45.902896 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jul 12 00:11:45.902958 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 12 00:11:45.903025 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jul 12 00:11:45.903094 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 12 00:11:45.903160 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jul 12 00:11:45.903232 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 12 00:11:45.903298 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jul 12 00:11:45.903362 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 12 00:11:45.903427 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jul 12 00:11:45.903543 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 12 00:11:45.903614 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jul 12 00:11:45.903682 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 12 00:11:45.903748 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jul 12 00:11:45.906928 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jul 12 00:11:45.907037 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 12 00:11:45.907122 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jul 12 00:11:45.907200 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:11:45.907277 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jul 12 00:11:45.907348 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jul 12 00:11:45.907426 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 12 00:11:45.907517 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jul 12 00:11:45.907591 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jul 12 00:11:45.907667 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jul 12 00:11:45.907744 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jul 12 00:11:45.907899 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 12 00:11:45.907970 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jul 12 00:11:45.908037 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jul 12 00:11:45.908115 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jul 12 00:11:45.908184 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jul 12 00:11:45.908256 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jul 12 00:11:45.908326 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 12 00:11:45.908399 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jul 12 00:11:45.908482 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jul 12 00:11:45.908559 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jul 12 00:11:45.908625 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jul 12 00:11:45.908694 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 12 00:11:45.908766 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jul 12 00:11:45.908912 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jul 12 00:11:45.908985 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jul 12 00:11:45.909057 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jul 12 00:11:45.909120 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 12 00:11:45.909184 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jul 12 00:11:45.909250 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jul 12 00:11:45.909323 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jul 12 00:11:45.909391 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jul 12 00:11:45.909500 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jul 12 00:11:45.909576 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 12 00:11:45.909649 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jul 12 00:11:45.909715 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 12 00:11:45.909837 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jul 12 00:11:45.909984 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jul 12 00:11:45.910056 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jul 12 00:11:45.910122 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jul 12 00:11:45.910184 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 12 00:11:45.910245 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jul 12 00:11:45.910312 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 12 00:11:45.910378 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jul 12 00:11:45.910443 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 12 00:11:45.910550 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jul 12 00:11:45.910616 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 12 00:11:45.910684 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jul 12 00:11:45.910749 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jul 12 00:11:45.910878 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jul 12 00:11:45.910953 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jul 12 00:11:45.911018 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 12 00:11:45.911074 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:11:45.911130 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 12 00:11:45.911203 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jul 12 00:11:45.911261 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jul 12 00:11:45.911318 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jul 12 00:11:45.911386 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jul 12 00:11:45.911444 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jul 12 00:11:45.911519 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jul 12 00:11:45.911587 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jul 12 00:11:45.911650 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jul 12 00:11:45.911709 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jul 12 00:11:45.911780 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 12 00:11:45.911892 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jul 12 00:11:45.911953 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jul 12 00:11:45.912032 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jul 12 00:11:45.912093 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jul 12 00:11:45.912152 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jul 12 00:11:45.912218 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jul 12 00:11:45.912279 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jul 12 00:11:45.912338 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jul 12 00:11:45.912415 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jul 12 00:11:45.912528 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jul 12 00:11:45.912600 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jul 12 00:11:45.912668 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jul 12 00:11:45.912728 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jul 12 00:11:45.912802 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jul 12 00:11:45.912885 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jul 12 00:11:45.912946 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jul 12 00:11:45.913005 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:11:45.913020 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:11:45.913028 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:11:45.913036 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:11:45.913044 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:11:45.913052 kernel: iommu: Default domain type: Translated Jul 12 00:11:45.913059 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:11:45.913067 kernel: efivars: Registered efivars operations Jul 12 00:11:45.913075 kernel: vgaarb: loaded Jul 12 00:11:45.913082 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:11:45.913093 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:11:45.913100 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:11:45.913108 kernel: pnp: PnP ACPI init Jul 12 00:11:45.913182 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 12 00:11:45.913194 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:11:45.913201 kernel: NET: Registered PF_INET protocol family Jul 12 00:11:45.913209 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:11:45.913217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:11:45.913227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:11:45.913235 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:11:45.913243 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:11:45.913251 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:11:45.913258 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:11:45.913266 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:11:45.913274 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:11:45.913347 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jul 12 00:11:45.913358 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:11:45.913369 kernel: kvm [1]: HYP mode not available Jul 12 00:11:45.913376 kernel: Initialise system trusted keyrings Jul 12 00:11:45.913384 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:11:45.913392 kernel: Key type asymmetric registered Jul 12 00:11:45.913399 kernel: Asymmetric key parser 'x509' registered Jul 12 00:11:45.913407 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:11:45.913415 kernel: io scheduler mq-deadline registered Jul 12 00:11:45.913422 kernel: io scheduler kyber registered Jul 12 00:11:45.913430 kernel: io scheduler bfq registered Jul 12 00:11:45.913442 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:11:45.913536 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jul 12 00:11:45.913603 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jul 12 00:11:45.913668 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.913736 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jul 12 00:11:45.913881 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jul 12 00:11:45.913966 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jul 12 00:11:45.914033 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jul 12 00:11:45.914096 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jul 12 00:11:45.914158 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.914225 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 12 00:11:45.914288 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 12 00:11:45.914354 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.914420 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 12 00:11:45.914501 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 12 00:11:45.914567 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.914634 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 12 00:11:45.914697 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 12 00:11:45.914762 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.916013 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 12 00:11:45.916102 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 12 00:11:45.916170 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.916236 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 12 00:11:45.916299 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 12 00:11:45.916372 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.916384 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 12 00:11:45.916464 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 12 00:11:45.916539 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jul 12 00:11:45.916603 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:11:45.916614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:11:45.916622 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:11:45.916634 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:11:45.916708 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 12 00:11:45.916780 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 12 00:11:45.916803 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:11:45.916812 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:11:45.916885 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 12 00:11:45.916896 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 12 00:11:45.916904 kernel: thunder_xcv, ver 1.0 Jul 12 00:11:45.916912 kernel: thunder_bgx, ver 1.0 Jul 12 00:11:45.916923 kernel: nicpf, ver 1.0 Jul 12 00:11:45.916931 kernel: nicvf, ver 1.0 Jul 12 00:11:45.917009 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:11:45.917071 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:11:45 UTC (1752279105)
Jul 12 00:11:45.917081 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:11:45.917089 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:11:45.917097 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:11:45.917105 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:11:45.917114 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:11:45.917123 kernel: Segment Routing with IPv6 Jul 12 00:11:45.917130 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:11:45.917138 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:11:45.917145 kernel: Key type dns_resolver registered Jul 12 00:11:45.917153 kernel: registered taskstats version 1 Jul 12 00:11:45.917161 kernel: Loading compiled-in X.509 certificates Jul 12 00:11:45.917169 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:11:45.917176 kernel: Key type .fscrypt registered Jul 12 00:11:45.917185 kernel: Key type fscrypt-provisioning registered Jul 12 00:11:45.917193 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 00:11:45.917200 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:11:45.917208 kernel: ima: No architecture policies found Jul 12 00:11:45.917216 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:11:45.917224 kernel: clk: Disabling unused clocks Jul 12 00:11:45.917231 kernel: Freeing unused kernel memory: 39424K Jul 12 00:11:45.917239 kernel: Run /init as init process Jul 12 00:11:45.917247 kernel: with arguments: Jul 12 00:11:45.917257 kernel: /init Jul 12 00:11:45.917265 kernel: with environment: Jul 12 00:11:45.917272 kernel: HOME=/ Jul 12 00:11:45.917280 kernel: TERM=linux Jul 12 00:11:45.917288 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:11:45.917298 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:11:45.917308 systemd[1]: Detected virtualization kvm. Jul 12 00:11:45.917317 systemd[1]: Detected architecture arm64. Jul 12 00:11:45.917327 systemd[1]: Running in initrd. Jul 12 00:11:45.917335 systemd[1]: No hostname configured, using default hostname. Jul 12 00:11:45.917343 systemd[1]: Hostname set to . Jul 12 00:11:45.917352 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:11:45.917360 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:11:45.917376 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:11:45.917384 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:11:45.917393 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:11:45.917404 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:11:45.917414 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:11:45.917425 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:11:45.917435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:11:45.917444 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:11:45.917464 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:11:45.917475 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:11:45.917486 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:11:45.917495 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:11:45.917503 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:11:45.917512 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:11:45.917522 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:11:45.917531 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:11:45.917539 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:11:45.917548 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:11:45.917558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:11:45.917567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:11:45.917577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:11:45.917586 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:11:45.917597 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:11:45.917607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:11:45.917616 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:11:45.917624 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:11:45.917632 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:11:45.917642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:11:45.917650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:11:45.917659 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:11:45.917667 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:11:45.917699 systemd-journald[237]: Collecting audit messages is disabled. Jul 12 00:11:45.917721 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:11:45.917730 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:11:45.917739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:11:45.917749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:45.917759 systemd-journald[237]: Journal started Jul 12 00:11:45.917777 systemd-journald[237]: Runtime Journal (/run/log/journal/70014b0a9a2644cfb21520069025adac) is 8.0M, max 76.6M, 68.6M free. Jul 12 00:11:45.918879 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:11:45.900982 systemd-modules-load[238]: Inserted module 'overlay' Jul 12 00:11:45.920074 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 12 00:11:45.923039 kernel: Bridge firewalling registered Jul 12 00:11:45.922786 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 12 00:11:45.934201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:11:45.935815 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:11:45.937837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:11:45.948081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:45.950534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:11:45.952229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:11:45.964856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:45.968592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:11:45.971969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:11:45.983695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:11:45.988009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:11:45.997854 dracut-cmdline[272]: dracut-dracut-053 Jul 12 00:11:46.003192 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:11:46.018399 systemd-resolved[273]: Positive Trust Anchors: Jul 12 00:11:46.018413 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:11:46.018445 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:11:46.029024 systemd-resolved[273]: Defaulting to hostname 'linux'. Jul 12 00:11:46.030087 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:11:46.031318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:11:46.093872 kernel: SCSI subsystem initialized Jul 12 00:11:46.098825 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:11:46.105831 kernel: iscsi: registered transport (tcp) Jul 12 00:11:46.119968 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:11:46.120097 kernel: QLogic iSCSI HBA Driver Jul 12 00:11:46.175606 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:11:46.179990 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:11:46.203326 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 12 00:11:46.203435 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:11:46.203494 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:11:46.256867 kernel: raid6: neonx8 gen() 15635 MB/s Jul 12 00:11:46.273839 kernel: raid6: neonx4 gen() 15502 MB/s Jul 12 00:11:46.290867 kernel: raid6: neonx2 gen() 13149 MB/s Jul 12 00:11:46.307844 kernel: raid6: neonx1 gen() 10410 MB/s Jul 12 00:11:46.324842 kernel: raid6: int64x8 gen() 6924 MB/s Jul 12 00:11:46.341929 kernel: raid6: int64x4 gen() 7272 MB/s Jul 12 00:11:46.358854 kernel: raid6: int64x2 gen() 6105 MB/s Jul 12 00:11:46.375856 kernel: raid6: int64x1 gen() 5033 MB/s Jul 12 00:11:46.375931 kernel: raid6: using algorithm neonx8 gen() 15635 MB/s Jul 12 00:11:46.392856 kernel: raid6: .... xor() 11859 MB/s, rmw enabled Jul 12 00:11:46.392907 kernel: raid6: using neon recovery algorithm Jul 12 00:11:46.397828 kernel: xor: measuring software checksum speed Jul 12 00:11:46.397875 kernel: 8regs : 19754 MB/sec Jul 12 00:11:46.397901 kernel: 32regs : 17388 MB/sec Jul 12 00:11:46.398860 kernel: arm64_neon : 26901 MB/sec Jul 12 00:11:46.398894 kernel: xor: using function: arm64_neon (26901 MB/sec) Jul 12 00:11:46.450859 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:11:46.468209 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:11:46.476044 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:11:46.490986 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 12 00:11:46.494614 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:11:46.514156 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:11:46.531779 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jul 12 00:11:46.570222 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:11:46.575971 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:11:46.625088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:11:46.634029 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:11:46.649395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:11:46.652279 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:11:46.654067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:11:46.654658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:11:46.660013 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:11:46.687864 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:11:46.729312 kernel: scsi host0: Virtio SCSI HBA Jul 12 00:11:46.734832 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 12 00:11:46.735849 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 12 00:11:46.749034 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:11:46.749627 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:11:46.754088 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 12 00:11:46.758421 kernel: ACPI: bus type USB registered Jul 12 00:11:46.758458 kernel: usbcore: registered new interface driver usbfs Jul 12 00:11:46.758469 kernel: usbcore: registered new interface driver hub Jul 12 00:11:46.758479 kernel: usbcore: registered new device driver usb Jul 12 00:11:46.757393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:11:46.758121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:46.759071 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:11:46.766150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:11:46.781208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:46.787741 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:11:46.790940 kernel: sr 0:0:0:0: Power-on or device reset occurred Jul 12 00:11:46.792821 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jul 12 00:11:46.792974 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:11:46.792992 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jul 12 00:11:46.801832 kernel: sd 0:0:0:1: Power-on or device reset occurred Jul 12 00:11:46.802035 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 12 00:11:46.802168 kernel: sd 0:0:0:1: [sda] Write Protect is off Jul 12 00:11:46.802252 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jul 12 00:11:46.802339 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 12 00:11:46.806972 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:11:46.807034 kernel: GPT:17805311 != 80003071 Jul 12 00:11:46.807045 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:11:46.812049 kernel: GPT:17805311 != 80003071 Jul 12 00:11:46.812100 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:11:46.812111 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:11:46.815813 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jul 12 00:11:46.821019 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 12 00:11:46.821224 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 12 00:11:46.823582 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 12 00:11:46.825659 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 12 00:11:46.825847 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 12 00:11:46.825953 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 12 00:11:46.829820 kernel: hub 1-0:1.0: USB hub found Jul 12 00:11:46.829983 kernel: hub 1-0:1.0: 4 ports detected Jul 12 00:11:46.831830 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 12 00:11:46.831980 kernel: hub 2-0:1.0: USB hub found Jul 12 00:11:46.832597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 12 00:11:46.834952 kernel: hub 2-0:1.0: 4 ports detected Jul 12 00:11:46.861824 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (509) Jul 12 00:11:46.864685 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (528) Jul 12 00:11:46.878507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 12 00:11:46.889128 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jul 12 00:11:46.889874 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 12 00:11:46.896207 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 12 00:11:46.903175 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 12 00:11:46.915138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:11:46.924480 disk-uuid[575]: Primary Header is updated. Jul 12 00:11:46.924480 disk-uuid[575]: Secondary Entries is updated. Jul 12 00:11:46.924480 disk-uuid[575]: Secondary Header is updated. Jul 12 00:11:46.931819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:11:47.074871 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 12 00:11:47.209827 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 12 00:11:47.209884 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 12 00:11:47.210066 kernel: usbcore: registered new interface driver usbhid Jul 12 00:11:47.210080 kernel: usbhid: USB HID core driver Jul 12 00:11:47.316865 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 12 00:11:47.444832 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 12 00:11:47.498879 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 12 00:11:47.946083 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:11:47.946150 disk-uuid[577]: The operation has completed successfully. Jul 12 00:11:47.998104 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:11:47.998226 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 00:11:48.012056 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:11:48.026876 sh[594]: Success Jul 12 00:11:48.038828 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:11:48.102408 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:11:48.104392 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:11:48.106500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
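verity-setup.service above assembles /dev/mapper/usr, with the kernel choosing the sha256-ce (ARMv8 crypto extensions) implementation. Conceptually, dm-verity hashes every block of the read-only /usr partition and folds the digests into a hash tree whose root must match the verity.usrhash value passed on the kernel command line; any tampered block changes the root and the device fails to verify. A heavily simplified sketch of that idea (real dm-verity uses a salted, multi-level tree with a specific on-disk layout, not this flat reduction):

    import hashlib

    BLOCK = 4096  # dm-verity's default data/hash block size

    def verity_root(data: bytes, salt: bytes = b"") -> str:
        # Level 0: digest each data block (zero-padded like a block device).
        level = [hashlib.sha256(salt + data[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
                 for i in range(0, len(data), BLOCK)]
        # Fold pairs of digests upward until one root digest remains.
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])    # duplicate an odd tail node
            level = [hashlib.sha256(salt + level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    image = b"read-only /usr contents\n" * 4096   # stand-in for the partition
    root = verity_root(image)
    expected = root  # on Flatcar this would come from verity.usrhash= on the cmdline
    print("usr verified" if root == expected else "verity: root hash mismatch")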
Jul 12 00:11:48.127270 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:11:48.127346 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:11:48.127370 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:11:48.128022 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:11:48.128858 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:11:48.135824 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 12 00:11:48.138713 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:11:48.139619 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:11:48.149196 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:11:48.152985 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 00:11:48.168884 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:11:48.168950 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:11:48.169815 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:11:48.173820 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 12 00:11:48.173867 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:11:48.185773 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:11:48.186936 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:11:48.193530 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:11:48.200065 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:11:48.300252 ignition[673]: Ignition 2.19.0 Jul 12 00:11:48.300263 ignition[673]: Stage: fetch-offline Jul 12 00:11:48.300297 ignition[673]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:48.300305 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:48.302465 ignition[673]: parsed url from cmdline: "" Jul 12 00:11:48.302472 ignition[673]: no config URL provided Jul 12 00:11:48.302480 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:11:48.302493 ignition[673]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:11:48.302499 ignition[673]: failed to fetch config: resource requires networking Jul 12 00:11:48.302699 ignition[673]: Ignition finished successfully Jul 12 00:11:48.306089 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:11:48.311764 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:11:48.318972 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:11:48.339210 systemd-networkd[783]: lo: Link UP Jul 12 00:11:48.339223 systemd-networkd[783]: lo: Gained carrier Jul 12 00:11:48.340762 systemd-networkd[783]: Enumeration completed Jul 12 00:11:48.341122 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:11:48.341281 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:11:48.341285 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:11:48.342399 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:48.342403 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:11:48.343094 systemd[1]: Reached target network.target - Network. Jul 12 00:11:48.343399 systemd-networkd[783]: eth0: Link UP Jul 12 00:11:48.343403 systemd-networkd[783]: eth0: Gained carrier Jul 12 00:11:48.343411 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:48.347568 systemd-networkd[783]: eth1: Link UP Jul 12 00:11:48.347571 systemd-networkd[783]: eth1: Gained carrier Jul 12 00:11:48.347580 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:48.354129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 12 00:11:48.367321 ignition[785]: Ignition 2.19.0 Jul 12 00:11:48.367338 ignition[785]: Stage: fetch Jul 12 00:11:48.367600 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:48.367611 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:48.367720 ignition[785]: parsed url from cmdline: "" Jul 12 00:11:48.367726 ignition[785]: no config URL provided Jul 12 00:11:48.367730 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:11:48.367737 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:11:48.367756 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jul 12 00:11:48.368532 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jul 12 00:11:48.377873 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:11:48.411911 systemd-networkd[783]: eth0: DHCPv4 address 91.99.220.16/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 12 00:11:48.568690 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jul 12 00:11:48.577466 ignition[785]: GET result: OK Jul 12 00:11:48.577597 ignition[785]: parsing config with SHA512: 1e9237e2c8ff5292e7d15ac5288c3653f9b6fed88a1d2c17a3a5d820524a7d1d3ef44c23b3874af58498927f6ab94d2a930169c662b2e101687c1b1dfebac2bf Jul 12 00:11:48.584294 unknown[785]: fetched base config from "system" Jul 12 00:11:48.584310 unknown[785]: fetched base config from "system" Jul 12 00:11:48.584878 ignition[785]: fetch: fetch complete Jul 12 00:11:48.584324 unknown[785]: fetched user config from "hetzner" Jul 12 00:11:48.584883 ignition[785]: fetch: fetch passed Jul 12 00:11:48.586521 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 12 00:11:48.584938 ignition[785]: Ignition finished successfully Jul 12 00:11:48.592953 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
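The fetch stage above shows Ignition's ordering problem and its solution: attempt #1 fails with "network is unreachable" because DHCP has not configured eth0/eth1 yet, then attempt #2 succeeds once the 169.254.169.254 link-local metadata service is reachable, and the received config is identified by its SHA512. A small sketch of that fetch-retry-hash loop (a Python stand-in for Ignition's Go implementation; it can only succeed from inside a VM that actually serves this endpoint):

    import hashlib
    import time
    import urllib.error
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(retries: int = 5, backoff: float = 2.0) -> bytes:
        for attempt in range(1, retries + 1):
            print(f"GET {USERDATA_URL}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                    body = resp.read()
                print("GET result: OK")
                print("parsing config with SHA512:", hashlib.sha512(body).hexdigest())
                return body
            except (urllib.error.URLError, OSError) as err:
                # e.g. "network is unreachable" before DHCP has finished
                print(f"GET error: {err}")
                time.sleep(backoff)
        raise RuntimeError("failed to fetch config: resource requires networking")

    config = fetch_userdata()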
Jul 12 00:11:48.605939 ignition[792]: Ignition 2.19.0 Jul 12 00:11:48.605949 ignition[792]: Stage: kargs Jul 12 00:11:48.606126 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:48.606136 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:48.607142 ignition[792]: kargs: kargs passed Jul 12 00:11:48.607197 ignition[792]: Ignition finished successfully Jul 12 00:11:48.608835 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 00:11:48.613991 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 12 00:11:48.628202 ignition[799]: Ignition 2.19.0 Jul 12 00:11:48.628213 ignition[799]: Stage: disks Jul 12 00:11:48.628386 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:48.628396 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:48.630776 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 00:11:48.629356 ignition[799]: disks: disks passed Jul 12 00:11:48.633026 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 00:11:48.629402 ignition[799]: Ignition finished successfully Jul 12 00:11:48.633940 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 00:11:48.634782 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:11:48.635905 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:11:48.636855 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:11:48.645052 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 00:11:48.667919 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 12 00:11:48.672336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 00:11:48.683024 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 00:11:48.734809 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none. Jul 12 00:11:48.736064 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 00:11:48.737921 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 00:11:48.751001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:11:48.754963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 00:11:48.757621 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 12 00:11:48.759915 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:11:48.759949 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:11:48.768896 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815) Jul 12 00:11:48.771820 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:11:48.771879 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:11:48.771909 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:11:48.774598 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 00:11:48.776981 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 12 00:11:48.788997 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 12 00:11:48.789061 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:11:48.794322 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 00:11:48.833494 coreos-metadata[817]: Jul 12 00:11:48.833 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jul 12 00:11:48.835679 coreos-metadata[817]: Jul 12 00:11:48.835 INFO Fetch successful Jul 12 00:11:48.837838 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:11:48.839110 coreos-metadata[817]: Jul 12 00:11:48.838 INFO wrote hostname ci-4081-3-4-n-8926aa35a3 to /sysroot/etc/hostname Jul 12 00:11:48.840374 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:11:48.845204 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:11:48.850334 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:11:48.855872 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:11:48.951270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 00:11:48.957978 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 12 00:11:48.963515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 00:11:48.967844 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:11:48.994100 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 00:11:48.998825 ignition[932]: INFO : Ignition 2.19.0 Jul 12 00:11:48.998825 ignition[932]: INFO : Stage: mount Jul 12 00:11:48.998825 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:48.998825 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:49.001388 ignition[932]: INFO : mount: mount passed Jul 12 00:11:49.001388 ignition[932]: INFO : Ignition finished successfully Jul 12 00:11:49.002157 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 00:11:49.009922 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 00:11:49.128234 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 00:11:49.142181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 00:11:49.155503 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Jul 12 00:11:49.155574 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:11:49.155608 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:11:49.156022 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:11:49.160856 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 12 00:11:49.160943 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:11:49.164530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 12 00:11:49.188955 ignition[960]: INFO : Ignition 2.19.0 Jul 12 00:11:49.188955 ignition[960]: INFO : Stage: files Jul 12 00:11:49.190337 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:49.190337 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:49.190337 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:11:49.192661 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:11:49.192661 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:11:49.195375 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:11:49.196326 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:11:49.196326 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:11:49.195938 unknown[960]: wrote ssh authorized keys file for user: core Jul 12 00:11:49.199224 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:11:49.199224 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 12 00:11:49.266711 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:11:49.392699 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:11:49.403398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 12 00:11:49.666002 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 12 00:11:49.876845 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 12 00:11:49.876845 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 00:11:49.881900 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:11:49.881900 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:11:49.881900 ignition[960]: INFO : files: files passed Jul 12 00:11:49.881900 ignition[960]: INFO : Ignition finished successfully Jul 12 00:11:49.883627 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 00:11:49.894105 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 00:11:49.895707 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 00:11:49.903364 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:11:49.903498 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 12 00:11:49.914179 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:11:49.914179 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:11:49.916966 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:11:49.919674 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:11:49.921547 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 00:11:49.927037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 00:11:49.969259 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:11:49.969437 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 00:11:49.971601 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 12 00:11:49.973676 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 00:11:49.975199 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 00:11:49.981033 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 00:11:49.994305 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:11:49.999963 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 00:11:50.013247 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:11:50.014004 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:11:50.015271 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 00:11:50.016331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:11:50.016474 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 00:11:50.017902 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 00:11:50.018499 systemd[1]: Stopped target basic.target - Basic System. Jul 12 00:11:50.019591 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 00:11:50.020319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 00:11:50.021468 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 00:11:50.022476 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 00:11:50.023548 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:11:50.024680 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 00:11:50.025727 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 00:11:50.026760 systemd[1]: Stopped target swap.target - Swaps. Jul 12 00:11:50.027694 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:11:50.027841 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:11:50.029061 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:11:50.029725 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:11:50.030673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 00:11:50.030749 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 12 00:11:50.031744 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:11:50.031875 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 00:11:50.033273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:11:50.033389 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 00:11:50.034666 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:11:50.034766 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 00:11:50.035604 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 12 00:11:50.035704 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 12 00:11:50.043094 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 00:11:50.043592 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:11:50.043721 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:11:50.050375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 00:11:50.055940 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:11:50.056124 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:11:50.057539 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:11:50.057634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:11:50.066190 ignition[1013]: INFO : Ignition 2.19.0 Jul 12 00:11:50.066190 ignition[1013]: INFO : Stage: umount Jul 12 00:11:50.066190 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:11:50.066190 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:11:50.067749 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:11:50.071579 ignition[1013]: INFO : umount: umount passed Jul 12 00:11:50.071579 ignition[1013]: INFO : Ignition finished successfully Jul 12 00:11:50.069147 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 00:11:50.074475 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:11:50.075040 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 00:11:50.076351 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:11:50.076406 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 00:11:50.079925 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:11:50.079985 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 00:11:50.080571 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 12 00:11:50.080612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 12 00:11:50.081654 systemd[1]: Stopped target network.target - Network. Jul 12 00:11:50.082486 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:11:50.082536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 00:11:50.084878 systemd[1]: Stopped target paths.target - Path Units. Jul 12 00:11:50.085674 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:11:50.086230 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:11:50.087832 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 12 00:11:50.089883 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 00:11:50.091920 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:11:50.091966 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:11:50.092537 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:11:50.092570 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:11:50.093451 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:11:50.093506 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 00:11:50.095384 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 00:11:50.095446 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 00:11:50.096636 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 12 00:11:50.097958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 00:11:50.099398 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:11:50.101965 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:11:50.102272 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 00:11:50.103814 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:11:50.103865 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 00:11:50.105283 systemd-networkd[783]: eth0: DHCPv6 lease lost Jul 12 00:11:50.109962 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:11:50.110086 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 00:11:50.111837 systemd-networkd[783]: eth1: DHCPv6 lease lost Jul 12 00:11:50.112532 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 00:11:50.112632 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:11:50.115333 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:11:50.115517 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 00:11:50.116642 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:11:50.116672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:11:50.126712 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 00:11:50.128247 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:11:50.128381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:11:50.129611 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:11:50.129654 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:50.131235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:11:50.131283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 00:11:50.132524 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:11:50.146957 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:11:50.147087 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:11:50.152821 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:11:50.153026 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:11:50.155225 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 12 00:11:50.155276 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:11:50.156932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:11:50.156967 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:11:50.158575 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:11:50.158624 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:11:50.161133 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:11:50.161189 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:11:50.162922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:11:50.162959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:11:50.170185 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:11:50.170758 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:11:50.170846 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:11:50.171499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:11:50.171547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:50.176550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:11:50.176663 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:11:50.178041 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:11:50.188098 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:11:50.199959 systemd[1]: Switching root. Jul 12 00:11:50.246689 systemd-journald[237]: Journal stopped Jul 12 00:11:51.192610 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 12 00:11:51.192677 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:11:51.192690 kernel: SELinux: policy capability open_perms=1 Jul 12 00:11:51.192704 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:11:51.192713 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:11:51.192722 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:11:51.192736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:11:51.192745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:11:51.192754 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:11:51.192763 kernel: audit: type=1403 audit(1752279110.413:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:11:51.192777 systemd[1]: Successfully loaded SELinux policy in 37.319ms. Jul 12 00:11:51.192820 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.970ms. Jul 12 00:11:51.192835 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:11:51.192846 systemd[1]: Detected virtualization kvm. Jul 12 00:11:51.192856 systemd[1]: Detected architecture arm64. Jul 12 00:11:51.192865 systemd[1]: Detected first boot. Jul 12 00:11:51.192875 systemd[1]: Hostname set to <ci-4081-3-4-n-8926aa35a3>. Jul 12 00:11:51.192885 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:11:51.192895 zram_generator::config[1056]: No configuration found. Jul 12 00:11:51.192907 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:11:51.192917 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:11:51.193879 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:11:51.193892 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:11:51.193903 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:11:51.193913 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:11:51.193923 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:11:51.193933 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:11:51.193943 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:11:51.193959 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:11:51.193969 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:11:51.193979 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 00:11:51.193989 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:11:51.193999 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:11:51.194010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:11:51.194020 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:11:51.194030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:11:51.194045 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:11:51.194057 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:11:51.194067 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:11:51.194077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:11:51.194087 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:11:51.194097 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:11:51.194107 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:11:51.194120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:11:51.194134 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:11:51.194145 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:11:51.194155 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:11:51.194165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:11:51.194175 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:11:51.194185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:11:51.194195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:11:51.194205 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:11:51.194216 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jul 12 00:11:51.194227 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:11:51.194238 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:11:51.194249 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:11:51.194259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:11:51.194269 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:11:51.194279 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:11:51.194293 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:11:51.194303 systemd[1]: Reached target machines.target - Containers. Jul 12 00:11:51.194315 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:11:51.194325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:11:51.194335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:11:51.194345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:11:51.194355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:11:51.194365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:11:51.194375 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:11:51.194386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:11:51.194414 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:11:51.194430 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:11:51.194445 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:11:51.194455 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:11:51.194465 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:11:51.194475 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:11:51.194486 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:11:51.194496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:11:51.194507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:11:51.194517 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:11:51.194527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:11:51.194539 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:11:51.194549 systemd[1]: Stopped verity-setup.service. Jul 12 00:11:51.194560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:11:51.194570 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:11:51.194582 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:11:51.194592 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:11:51.194602 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:11:51.194613 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 12 00:11:51.194624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:11:51.194635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:11:51.194645 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:11:51.194655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:11:51.194665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:11:51.194676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:11:51.194687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:11:51.194697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:11:51.194707 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:11:51.194719 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:11:51.194730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:11:51.194740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:11:51.194751 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:11:51.194762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:11:51.194773 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:11:51.194785 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:11:51.196122 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:11:51.196142 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:11:51.196154 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:11:51.196164 kernel: fuse: init (API version 7.39) Jul 12 00:11:51.196204 systemd-journald[1137]: Collecting audit messages is disabled. Jul 12 00:11:51.196234 kernel: ACPI: bus type drm_connector registered Jul 12 00:11:51.196248 systemd-journald[1137]: Journal started Jul 12 00:11:51.196270 systemd-journald[1137]: Runtime Journal (/run/log/journal/70014b0a9a2644cfb21520069025adac) is 8.0M, max 76.6M, 68.6M free. Jul 12 00:11:51.200633 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:11:50.899514 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:11:50.927619 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 12 00:11:50.928047 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:11:51.204539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:11:51.204594 kernel: loop: module loaded Jul 12 00:11:51.208847 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:11:51.208902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:11:51.223784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:11:51.229624 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jul 12 00:11:51.237423 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:11:51.237496 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:11:51.237818 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:11:51.237965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:11:51.240156 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:11:51.240310 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:11:51.245937 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:11:51.246103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:11:51.247225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:11:51.284950 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:11:51.300840 kernel: loop0: detected capacity change from 0 to 8 Jul 12 00:11:51.302046 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:11:51.305221 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:11:51.306270 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:11:51.313360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:11:51.313812 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:11:51.315026 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:11:51.317193 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:11:51.322881 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:11:51.326942 systemd-journald[1137]: Time spent on flushing to /var/log/journal/70014b0a9a2644cfb21520069025adac is 32.187ms for 1126 entries. Jul 12 00:11:51.326942 systemd-journald[1137]: System Journal (/var/log/journal/70014b0a9a2644cfb21520069025adac) is 8.0M, max 584.8M, 576.8M free. Jul 12 00:11:51.367911 systemd-journald[1137]: Received client request to flush runtime journal. Jul 12 00:11:51.367958 kernel: loop1: detected capacity change from 0 to 203944 Jul 12 00:11:51.335993 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:11:51.341784 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:11:51.371530 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:11:51.381537 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:11:51.381864 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 12 00:11:51.393123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:11:51.397706 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:11:51.403194 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:11:51.425933 kernel: loop2: detected capacity change from 0 to 114432 Jul 12 00:11:51.431944 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Jul 12 00:11:51.431963 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. 
Jul 12 00:11:51.437910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:11:51.465435 kernel: loop3: detected capacity change from 0 to 114328 Jul 12 00:11:51.500853 kernel: loop4: detected capacity change from 0 to 8 Jul 12 00:11:51.503871 kernel: loop5: detected capacity change from 0 to 203944 Jul 12 00:11:51.537204 kernel: loop6: detected capacity change from 0 to 114432 Jul 12 00:11:51.555877 kernel: loop7: detected capacity change from 0 to 114328 Jul 12 00:11:51.568206 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 12 00:11:51.569330 (sd-merge)[1197]: Merged extensions into '/usr'. Jul 12 00:11:51.575082 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:11:51.575104 systemd[1]: Reloading... Jul 12 00:11:51.656909 zram_generator::config[1220]: No configuration found. Jul 12 00:11:51.731773 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:11:51.842174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:11:51.891027 systemd[1]: Reloading finished in 315 ms. Jul 12 00:11:51.915785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:11:51.920418 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:11:51.928653 systemd[1]: Starting ensure-sysext.service... Jul 12 00:11:51.935215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:11:51.937952 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:11:51.938071 systemd[1]: Reloading... Jul 12 00:11:51.974218 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:11:51.974884 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:11:51.976507 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:11:51.976901 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 12 00:11:51.977022 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 12 00:11:51.981460 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:11:51.981579 systemd-tmpfiles[1261]: Skipping /boot Jul 12 00:11:51.990077 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:11:51.990231 systemd-tmpfiles[1261]: Skipping /boot Jul 12 00:11:52.023058 zram_generator::config[1287]: No configuration found. Jul 12 00:11:52.125835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:11:52.173760 systemd[1]: Reloading finished in 235 ms. Jul 12 00:11:52.196275 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:11:52.202598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
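The burst of loop0 through loop7 devices and the sd-merge lines above are systemd-sysext at work: each extension image ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') is attached to a loop device and overlaid onto /usr, which is why a Reloading pass follows the merge. A rough sketch of the discovery-and-stacking idea, assuming the standard sysext search directories (the staging path below is hypothetical, and systemd-sysext itself performs far more validation):

    from pathlib import Path

    # Directories systemd-sysext scans for *.raw extension images
    # (per its documentation; /usr/lib/extensions is also searched).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list:
        images = []
        for d in SEARCH_DIRS:
            images.extend(sorted(Path(d).glob("*.raw")))
        return images

    names = [img.stem for img in discover_extensions()]
    print("Using extensions " + ", ".join(f"'{n}'" for n in names) + ".")

    # Each image is loop-mounted (the loopN lines above), then stacked as
    # overlayfs lowerdirs over the base /usr. The mount point here is a
    # hypothetical staging path, not systemd's actual one.
    lower = [f"/run/sysext-staging/{n}/usr" for n in names] + ["/usr"]
    print("mount -t overlay overlay -o lowerdir=" + ":".join(lower) + " /usr")
    print("Merged extensions into '/usr'.")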
Jul 12 00:11:52.226345 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:11:52.232141 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:11:52.236377 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:11:52.241586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:11:52.246965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:11:52.249560 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:11:52.253781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:11:52.257725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:11:52.262168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:11:52.263993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:11:52.265005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:11:52.269356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:11:52.269524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:11:52.271619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:11:52.275107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:11:52.275952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:11:52.279076 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:11:52.284593 systemd[1]: Finished ensure-sysext.service. Jul 12 00:11:52.291005 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:11:52.299994 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:11:52.315118 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:11:52.319988 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:11:52.322222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:11:52.322465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:11:52.325637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:11:52.328572 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:11:52.330208 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:11:52.332308 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:11:52.332489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:11:52.343247 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:11:52.344150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 12 00:11:52.346264 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:11:52.361880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:11:52.364235 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:11:52.373714 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jul 12 00:11:52.373865 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:11:52.382098 augenrules[1363]: No rules Jul 12 00:11:52.384359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:11:52.389222 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:11:52.415479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:11:52.434139 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:11:52.498191 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:11:52.498935 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:11:52.508458 systemd-networkd[1376]: lo: Link UP Jul 12 00:11:52.508469 systemd-networkd[1376]: lo: Gained carrier Jul 12 00:11:52.509059 systemd-networkd[1376]: Enumeration completed Jul 12 00:11:52.509142 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:11:52.510163 systemd-timesyncd[1345]: No network connectivity, watching for changes. Jul 12 00:11:52.520107 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:11:52.527747 systemd-resolved[1331]: Positive Trust Anchors: Jul 12 00:11:52.527761 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:11:52.527811 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:11:52.536976 systemd-resolved[1331]: Using system hostname 'ci-4081-3-4-n-8926aa35a3'. Jul 12 00:11:52.544831 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:11:52.557700 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:11:52.558438 systemd[1]: Reached target network.target - Network. Jul 12 00:11:52.558909 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:11:52.619822 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:11:52.636019 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:52.636031 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:11:52.637252 systemd-networkd[1376]: eth1: Link UP Jul 12 00:11:52.637262 systemd-networkd[1376]: eth1: Gained carrier Jul 12 00:11:52.637279 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:52.646520 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:52.646532 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:11:52.647612 systemd-networkd[1376]: eth0: Link UP Jul 12 00:11:52.647621 systemd-networkd[1376]: eth0: Gained carrier Jul 12 00:11:52.647636 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:11:52.667845 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 12 00:11:52.668563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:11:52.672720 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:11:52.673474 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jul 12 00:11:52.694780 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1383) Jul 12 00:11:52.692209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:11:52.699005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:11:52.702989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:11:52.704434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:11:52.704471 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:11:52.704902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:11:52.705303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:11:52.714300 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:11:52.714851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:11:52.720575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:11:52.721191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:11:52.734030 systemd-networkd[1376]: eth0: DHCPv4 address 91.99.220.16/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 12 00:11:52.736010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:11:52.736222 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Jul 12 00:11:52.736304 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 12 00:11:52.764239 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 12 00:11:52.764338 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 12 00:11:52.764355 kernel: [drm] features: -context_init Jul 12 00:11:52.767894 kernel: [drm] number of scanouts: 1 Jul 12 00:11:52.767988 kernel: [drm] number of cap sets: 0 Jul 12 00:11:52.775828 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 12 00:11:52.778862 kernel: Console: switching to colour frame buffer device 160x50 Jul 12 00:11:52.791954 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 12 00:11:52.791205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:11:52.799329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 12 00:11:52.807132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:11:52.809432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:11:52.810053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:52.819116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:11:52.821264 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:11:52.881970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:11:52.936723 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:11:52.943075 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:11:52.958837 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:11:52.986903 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:11:52.989534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:11:52.991159 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:11:52.992205 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:11:52.993014 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:11:52.993965 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:11:52.994705 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:11:52.995570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:11:52.996350 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:11:52.996474 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:11:52.997013 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:11:52.998424 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:11:53.001891 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:11:53.007036 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:11:53.009759 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:11:53.012287 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 12 00:11:53.013874 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:11:53.014484 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:11:53.015145 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:11:53.015286 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:11:53.017032 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:11:53.021037 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:11:53.025973 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:11:53.032100 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:11:53.039974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:11:53.047993 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:11:53.049304 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:11:53.052081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:11:53.056011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:11:53.072895 coreos-metadata[1447]: Jul 12 00:11:53.066 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 12 00:11:53.072895 coreos-metadata[1447]: Jul 12 00:11:53.070 INFO Fetch successful Jul 12 00:11:53.072895 coreos-metadata[1447]: Jul 12 00:11:53.070 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 12 00:11:53.072895 coreos-metadata[1447]: Jul 12 00:11:53.070 INFO Fetch successful Jul 12 00:11:53.074062 jq[1449]: false Jul 12 00:11:53.067150 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 12 00:11:53.073048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:11:53.082980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:11:53.091963 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:11:53.093528 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:11:53.095066 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:11:53.095993 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:11:53.104210 dbus-daemon[1448]: [system] SELinux support is enabled Jul 12 00:11:53.099917 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:11:53.102862 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:11:53.105004 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 12 00:11:53.125511 extend-filesystems[1452]: Found loop4 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found loop5 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found loop6 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found loop7 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda1 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda2 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda3 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found usr Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda4 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda6 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda7 Jul 12 00:11:53.125511 extend-filesystems[1452]: Found sda9 Jul 12 00:11:53.125511 extend-filesystems[1452]: Checking size of /dev/sda9 Jul 12 00:11:53.119295 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:11:53.188276 extend-filesystems[1452]: Resized partition /dev/sda9 Jul 12 00:11:53.119472 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:11:53.191015 update_engine[1463]: I20250712 00:11:53.182057 1463 main.cc:92] Flatcar Update Engine starting Jul 12 00:11:53.191015 update_engine[1463]: I20250712 00:11:53.189981 1463 update_check_scheduler.cc:74] Next update check in 6m6s Jul 12 00:11:53.205012 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 12 00:11:53.205138 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:11:53.123317 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:11:53.126219 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:11:53.134464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:11:53.217578 jq[1464]: true Jul 12 00:11:53.134507 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:11:53.135563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:11:53.135580 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:11:53.166520 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:11:53.220142 jq[1489]: true Jul 12 00:11:53.166742 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:11:53.195185 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:11:53.198853 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:11:53.201630 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:11:53.205524 systemd-logind[1460]: New seat seat0. Jul 12 00:11:53.213915 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:11:53.225886 tar[1470]: linux-arm64/helm Jul 12 00:11:53.213932 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 12 00:11:53.214177 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 12 00:11:53.274203 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1393)
Jul 12 00:11:53.339457 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 12 00:11:53.341553 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 00:11:53.360615 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jul 12 00:11:53.362492 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 12 00:11:53.362492 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5
Jul 12 00:11:53.362492 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jul 12 00:11:53.370887 extend-filesystems[1452]: Resized filesystem in /dev/sda9
Jul 12 00:11:53.370887 extend-filesystems[1452]: Found sr0
Jul 12 00:11:53.365300 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:11:53.375032 bash[1517]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:11:53.365559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:11:53.377295 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 00:11:53.394337 systemd[1]: Starting sshkeys.service...
Jul 12 00:11:53.416331 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 12 00:11:53.419295 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 12 00:11:53.490319 coreos-metadata[1527]: Jul 12 00:11:53.490 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 12 00:11:53.491898 coreos-metadata[1527]: Jul 12 00:11:53.491 INFO Fetch successful
Jul 12 00:11:53.495863 unknown[1527]: wrote ssh authorized keys file for user: core
Jul 12 00:11:53.543282 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:11:53.544272 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 12 00:11:53.549858 systemd[1]: Finished sshkeys.service.
Jul 12 00:11:53.583871 containerd[1485]: time="2025-07-12T00:11:53.583749920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 12 00:11:53.600009 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:11:53.656803 containerd[1485]: time="2025-07-12T00:11:53.654631320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659372 containerd[1485]: time="2025-07-12T00:11:53.659325600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659437 containerd[1485]: time="2025-07-12T00:11:53.659370640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:11:53.659437 containerd[1485]: time="2025-07-12T00:11:53.659399840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:11:53.659593 containerd[1485]: time="2025-07-12T00:11:53.659567520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 12 00:11:53.659628 containerd[1485]: time="2025-07-12T00:11:53.659591440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659668 containerd[1485]: time="2025-07-12T00:11:53.659650560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659698 containerd[1485]: time="2025-07-12T00:11:53.659669960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659896 containerd[1485]: time="2025-07-12T00:11:53.659873160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659932 containerd[1485]: time="2025-07-12T00:11:53.659895840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659932 containerd[1485]: time="2025-07-12T00:11:53.659909320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:11:53.659932 containerd[1485]: time="2025-07-12T00:11:53.659918560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.660008 containerd[1485]: time="2025-07-12T00:11:53.659990280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.660202 containerd[1485]: time="2025-07-12T00:11:53.660182800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:11:53.660312 containerd[1485]: time="2025-07-12T00:11:53.660291880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:11:53.660342 containerd[1485]: time="2025-07-12T00:11:53.660311200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:11:53.660415 containerd[1485]: time="2025-07-12T00:11:53.660394000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:11:53.660461 containerd[1485]: time="2025-07-12T00:11:53.660445200Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:11:53.667781 containerd[1485]: time="2025-07-12T00:11:53.667737640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:11:53.667955 containerd[1485]: time="2025-07-12T00:11:53.667930080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:11:53.667982 containerd[1485]: time="2025-07-12T00:11:53.667954520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 12 00:11:53.668775 containerd[1485]: time="2025-07-12T00:11:53.667971680Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 12 00:11:53.668814 containerd[1485]: time="2025-07-12T00:11:53.668785400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:11:53.668995 containerd[1485]: time="2025-07-12T00:11:53.668972280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:11:53.670800 containerd[1485]: time="2025-07-12T00:11:53.670337120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:11:53.673266 containerd[1485]: time="2025-07-12T00:11:53.673238560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 12 00:11:53.673329 containerd[1485]: time="2025-07-12T00:11:53.673269000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 12 00:11:53.673329 containerd[1485]: time="2025-07-12T00:11:53.673284760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 12 00:11:53.673329 containerd[1485]: time="2025-07-12T00:11:53.673300520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673329 containerd[1485]: time="2025-07-12T00:11:53.673314160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673329 containerd[1485]: time="2025-07-12T00:11:53.673326720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673424 containerd[1485]: time="2025-07-12T00:11:53.673340960Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673424 containerd[1485]: time="2025-07-12T00:11:53.673356440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673424 containerd[1485]: time="2025-07-12T00:11:53.673370520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673424 containerd[1485]: time="2025-07-12T00:11:53.673398040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673424 containerd[1485]: time="2025-07-12T00:11:53.673411280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673435200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673449680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673462200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673475880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673487640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673501480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673518 containerd[1485]: time="2025-07-12T00:11:53.673513280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673526840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673539760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673553680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673566000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673577880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673589800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673605280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 12 00:11:53.673637 containerd[1485]: time="2025-07-12T00:11:53.673630200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673768 containerd[1485]: time="2025-07-12T00:11:53.673642320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.673768 containerd[1485]: time="2025-07-12T00:11:53.673653640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:11:53.676671 containerd[1485]: time="2025-07-12T00:11:53.676639880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:11:53.676721 containerd[1485]: time="2025-07-12T00:11:53.676682240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 12 00:11:53.676721 containerd[1485]: time="2025-07-12T00:11:53.676695680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:11:53.676721 containerd[1485]: time="2025-07-12T00:11:53.676708280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 12 00:11:53.676721 containerd[1485]: time="2025-07-12T00:11:53.676720040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.676819 containerd[1485]: time="2025-07-12T00:11:53.676737640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 12 00:11:53.676819 containerd[1485]: time="2025-07-12T00:11:53.676748760Z" level=info msg="NRI interface is disabled by configuration."
Jul 12 00:11:53.676819 containerd[1485]: time="2025-07-12T00:11:53.676767760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:11:53.677121 containerd[1485]: time="2025-07-12T00:11:53.677060280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:11:53.677232 containerd[1485]: time="2025-07-12T00:11:53.677129600Z" level=info msg="Connect containerd service"
Jul 12 00:11:53.677232 containerd[1485]: time="2025-07-12T00:11:53.677167880Z" level=info msg="using legacy CRI server"
Jul 12 00:11:53.677232 containerd[1485]: time="2025-07-12T00:11:53.677174800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:11:53.677288 containerd[1485]: time="2025-07-12T00:11:53.677263080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:11:53.683801 containerd[1485]: time="2025-07-12T00:11:53.682047560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:11:53.684097 containerd[1485]: time="2025-07-12T00:11:53.684048160Z" level=info msg="Start subscribing containerd event"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684161240Z" level=info msg="Start recovering state"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684247800Z" level=info msg="Start event monitor"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684261800Z" level=info msg="Start snapshots syncer"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684271680Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684280240Z" level=info msg="Start streaming server"
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684410160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:11:53.685454 containerd[1485]: time="2025-07-12T00:11:53.684473240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:11:53.684622 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:11:53.685699 containerd[1485]: time="2025-07-12T00:11:53.685655520Z" level=info msg="containerd successfully booted in 0.104921s"
Jul 12 00:11:53.837173 tar[1470]: linux-arm64/LICENSE
Jul 12 00:11:53.837297 tar[1470]: linux-arm64/README.md
Jul 12 00:11:53.849760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:11:53.990035 systemd-networkd[1376]: eth0: Gained IPv6LL
Jul 12 00:11:53.990678 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Jul 12 00:11:53.996054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:11:53.997612 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:11:54.010588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:54.014951 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:11:54.050855 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:11:54.234529 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:11:54.262839 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:11:54.272177 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:11:54.281402 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:11:54.281754 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:11:54.294906 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:11:54.309432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:11:54.317285 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:11:54.326486 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 12 00:11:54.327995 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:11:54.694002 systemd-networkd[1376]: eth1: Gained IPv6LL
Jul 12 00:11:54.694880 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection.
Jul 12 00:11:54.871264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:54.872970 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:11:54.874028 systemd[1]: Startup finished in 785ms (kernel) + 4.722s (initrd) + 4.496s (userspace) = 10.004s. Jul 12 00:11:54.883481 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:11:55.433732 kubelet[1579]: E0712 00:11:55.433614 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:11:55.438227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:11:55.438453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:05.564133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:12:05.578204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:05.707191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:05.716160 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:05.770169 kubelet[1598]: E0712 00:12:05.770080 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:05.775738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:05.776063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:15.814476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:12:15.821032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:15.985068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:15.985285 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:16.036391 kubelet[1613]: E0712 00:12:16.036300 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:16.039225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:16.039362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:24.842112 systemd-timesyncd[1345]: Contacted time server 88.99.66.3:123 (2.flatcar.pool.ntp.org). Jul 12 00:12:24.842244 systemd-timesyncd[1345]: Initial clock synchronization to Sat 2025-07-12 00:12:24.959228 UTC. Jul 12 00:12:26.064449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 12 00:12:26.075200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:26.216900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:12:26.221666 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:26.268302 kubelet[1627]: E0712 00:12:26.268225 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:26.272491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:26.272846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:36.314257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 12 00:12:36.326177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:36.444419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:36.449758 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:36.496170 kubelet[1642]: E0712 00:12:36.496055 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:36.500127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:36.500271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:38.915978 update_engine[1463]: I20250712 00:12:38.915174 1463 update_attempter.cc:509] Updating boot flags... Jul 12 00:12:38.965892 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1658) Jul 12 00:12:39.012410 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1657) Jul 12 00:12:39.506202 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:12:39.512308 systemd[1]: Started sshd@0-91.99.220.16:22-139.178.68.195:35702.service - OpenSSH per-connection server daemon (139.178.68.195:35702). Jul 12 00:12:40.502506 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 35702 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:40.506752 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:40.518224 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:12:40.523178 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:12:40.525238 systemd-logind[1460]: New session 1 of user core. Jul 12 00:12:40.538539 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:12:40.550445 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:12:40.555464 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:12:40.666119 systemd[1672]: Queued start job for default target default.target. Jul 12 00:12:40.674944 systemd[1672]: Created slice app.slice - User Application Slice. Jul 12 00:12:40.675005 systemd[1672]: Reached target paths.target - Paths. 
Jul 12 00:12:40.675034 systemd[1672]: Reached target timers.target - Timers. Jul 12 00:12:40.677208 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:12:40.691062 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:12:40.691431 systemd[1672]: Reached target sockets.target - Sockets. Jul 12 00:12:40.691551 systemd[1672]: Reached target basic.target - Basic System. Jul 12 00:12:40.691696 systemd[1672]: Reached target default.target - Main User Target. Jul 12 00:12:40.691863 systemd[1672]: Startup finished in 129ms. Jul 12 00:12:40.692415 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:12:40.701236 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:12:41.417233 systemd[1]: Started sshd@1-91.99.220.16:22-139.178.68.195:35712.service - OpenSSH per-connection server daemon (139.178.68.195:35712). Jul 12 00:12:42.411050 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 35712 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:42.413294 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:42.419163 systemd-logind[1460]: New session 2 of user core. Jul 12 00:12:42.429266 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:12:43.102111 sshd[1683]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:43.106675 systemd[1]: sshd@1-91.99.220.16:22-139.178.68.195:35712.service: Deactivated successfully. Jul 12 00:12:43.108429 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:12:43.110155 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:12:43.111373 systemd-logind[1460]: Removed session 2. Jul 12 00:12:43.270101 systemd[1]: Started sshd@2-91.99.220.16:22-139.178.68.195:35724.service - OpenSSH per-connection server daemon (139.178.68.195:35724). Jul 12 00:12:44.252414 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 35724 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:44.254541 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:44.260465 systemd-logind[1460]: New session 3 of user core. Jul 12 00:12:44.271170 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:12:44.927722 sshd[1690]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:44.933649 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:12:44.934710 systemd[1]: sshd@2-91.99.220.16:22-139.178.68.195:35724.service: Deactivated successfully. Jul 12 00:12:44.936963 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:12:44.939549 systemd-logind[1460]: Removed session 3. Jul 12 00:12:45.104257 systemd[1]: Started sshd@3-91.99.220.16:22-139.178.68.195:35738.service - OpenSSH per-connection server daemon (139.178.68.195:35738). Jul 12 00:12:46.078419 sshd[1697]: Accepted publickey for core from 139.178.68.195 port 35738 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:46.080509 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:46.085012 systemd-logind[1460]: New session 4 of user core. Jul 12 00:12:46.092857 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:12:46.564560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jul 12 00:12:46.572172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:46.689930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:46.697096 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:46.745490 kubelet[1709]: E0712 00:12:46.745443 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:46.749086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:46.749303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:46.758070 sshd[1697]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:46.762218 systemd[1]: sshd@3-91.99.220.16:22-139.178.68.195:35738.service: Deactivated successfully. Jul 12 00:12:46.763729 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:12:46.764534 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:12:46.765652 systemd-logind[1460]: Removed session 4. Jul 12 00:12:46.946991 systemd[1]: Started sshd@4-91.99.220.16:22-139.178.68.195:35752.service - OpenSSH per-connection server daemon (139.178.68.195:35752). Jul 12 00:12:47.939931 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 35752 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:47.941940 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:47.947124 systemd-logind[1460]: New session 5 of user core. Jul 12 00:12:47.954120 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:12:48.479620 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:12:48.480327 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:12:48.495924 sudo[1722]: pam_unix(sudo:session): session closed for user root Jul 12 00:12:48.658787 sshd[1719]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:48.665926 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:12:48.667703 systemd[1]: sshd@4-91.99.220.16:22-139.178.68.195:35752.service: Deactivated successfully. Jul 12 00:12:48.670048 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:12:48.672093 systemd-logind[1460]: Removed session 5. Jul 12 00:12:48.829178 systemd[1]: Started sshd@5-91.99.220.16:22-139.178.68.195:54652.service - OpenSSH per-connection server daemon (139.178.68.195:54652). Jul 12 00:12:49.809065 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 54652 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:49.812590 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:49.817743 systemd-logind[1460]: New session 6 of user core. Jul 12 00:12:49.828143 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 12 00:12:50.335070 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:12:50.335370 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:12:50.338949 sudo[1731]: pam_unix(sudo:session): session closed for user root Jul 12 00:12:50.344755 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:12:50.345128 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:12:50.360162 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:12:50.363912 auditctl[1734]: No rules Jul 12 00:12:50.364238 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:12:50.364405 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:12:50.370589 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:12:50.408286 augenrules[1752]: No rules Jul 12 00:12:50.410073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:12:50.411879 sudo[1730]: pam_unix(sudo:session): session closed for user root Jul 12 00:12:50.571510 sshd[1727]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:50.577512 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:12:50.577940 systemd[1]: sshd@5-91.99.220.16:22-139.178.68.195:54652.service: Deactivated successfully. Jul 12 00:12:50.579914 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:12:50.582695 systemd-logind[1460]: Removed session 6. Jul 12 00:12:50.747318 systemd[1]: Started sshd@6-91.99.220.16:22-139.178.68.195:54660.service - OpenSSH per-connection server daemon (139.178.68.195:54660). Jul 12 00:12:51.725337 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 54660 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:12:51.727227 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:12:51.733869 systemd-logind[1460]: New session 7 of user core. Jul 12 00:12:51.743133 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:12:52.245673 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:12:52.245985 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:12:52.552257 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:12:52.552645 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:12:52.794350 dockerd[1778]: time="2025-07-12T00:12:52.793829326Z" level=info msg="Starting up" Jul 12 00:12:52.873651 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3540216133-merged.mount: Deactivated successfully. Jul 12 00:12:52.899667 dockerd[1778]: time="2025-07-12T00:12:52.899587715Z" level=info msg="Loading containers: start." Jul 12 00:12:53.019863 kernel: Initializing XFRM netlink socket Jul 12 00:12:53.107180 systemd-networkd[1376]: docker0: Link UP Jul 12 00:12:53.125594 dockerd[1778]: time="2025-07-12T00:12:53.125455254Z" level=info msg="Loading containers: done." 
Jul 12 00:12:53.143114 dockerd[1778]: time="2025-07-12T00:12:53.143028662Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:12:53.143416 dockerd[1778]: time="2025-07-12T00:12:53.143174753Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:12:53.143416 dockerd[1778]: time="2025-07-12T00:12:53.143317283Z" level=info msg="Daemon has completed initialization" Jul 12 00:12:53.184824 dockerd[1778]: time="2025-07-12T00:12:53.184711239Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:12:53.185165 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:12:54.255433 containerd[1485]: time="2025-07-12T00:12:54.255325373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:12:54.882085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760628455.mount: Deactivated successfully. Jul 12 00:12:55.790255 containerd[1485]: time="2025-07-12T00:12:55.790196331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:55.792540 containerd[1485]: time="2025-07-12T00:12:55.791828206Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651885" Jul 12 00:12:55.792540 containerd[1485]: time="2025-07-12T00:12:55.792472138Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:55.799963 containerd[1485]: time="2025-07-12T00:12:55.799889639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:55.801403 containerd[1485]: time="2025-07-12T00:12:55.801346308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.545975202s" Jul 12 00:12:55.801403 containerd[1485]: time="2025-07-12T00:12:55.801396961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:12:55.805298 containerd[1485]: time="2025-07-12T00:12:55.805077824Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:12:56.813605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 12 00:12:56.822091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:56.948037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:12:56.960336 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:12:57.002838 kubelet[1983]: E0712 00:12:57.002318 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:12:57.005492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:12:57.005831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:12:57.015053 containerd[1485]: time="2025-07-12T00:12:57.014994318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:57.017603 containerd[1485]: time="2025-07-12T00:12:57.017561603Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459697" Jul 12 00:12:57.019060 containerd[1485]: time="2025-07-12T00:12:57.019014580Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:57.023362 containerd[1485]: time="2025-07-12T00:12:57.023311138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:57.025075 containerd[1485]: time="2025-07-12T00:12:57.025024248Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.219907934s" Jul 12 00:12:57.025075 containerd[1485]: time="2025-07-12T00:12:57.025065617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:12:57.026313 containerd[1485]: time="2025-07-12T00:12:57.026099108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:12:58.064465 containerd[1485]: time="2025-07-12T00:12:58.064368922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:58.066474 containerd[1485]: time="2025-07-12T00:12:58.066396974Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125086" Jul 12 00:12:58.067272 containerd[1485]: time="2025-07-12T00:12:58.066888904Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 12 00:12:58.070678 containerd[1485]: time="2025-07-12T00:12:58.070598223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:58.072396 containerd[1485]: time="2025-07-12T00:12:58.072213359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.04605964s" Jul 12 00:12:58.072396 containerd[1485]: time="2025-07-12T00:12:58.072257887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:12:58.073245 containerd[1485]: time="2025-07-12T00:12:58.072926450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:12:59.117460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809430814.mount: Deactivated successfully. Jul 12 00:12:59.512922 containerd[1485]: time="2025-07-12T00:12:59.512739807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:59.515319 containerd[1485]: time="2025-07-12T00:12:59.515276886Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915983" Jul 12 00:12:59.516741 containerd[1485]: time="2025-07-12T00:12:59.516703652Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:59.520577 containerd[1485]: time="2025-07-12T00:12:59.520532275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:12:59.521316 containerd[1485]: time="2025-07-12T00:12:59.521281244Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.448223091s" Jul 12 00:12:59.521468 containerd[1485]: time="2025-07-12T00:12:59.521448793Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:12:59.522182 containerd[1485]: time="2025-07-12T00:12:59.522089224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:13:00.153643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3433893833.mount: Deactivated successfully. 
Jul 12 00:13:00.929101 containerd[1485]: time="2025-07-12T00:13:00.929011857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:00.930967 containerd[1485]: time="2025-07-12T00:13:00.930913448Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jul 12 00:13:00.931900 containerd[1485]: time="2025-07-12T00:13:00.931441294Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:00.936387 containerd[1485]: time="2025-07-12T00:13:00.936324373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:00.938179 containerd[1485]: time="2025-07-12T00:13:00.938048255Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.415765837s" Jul 12 00:13:00.938179 containerd[1485]: time="2025-07-12T00:13:00.938087341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:13:00.939629 containerd[1485]: time="2025-07-12T00:13:00.939584626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:13:01.484982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559053663.mount: Deactivated successfully. 
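Each pull above ends with a tmpmount being torn down and the image landing in containerd's k8s.io namespace. To enumerate what has arrived so far (a sketch; crictl needs the runtime endpoint on the command line unless /etc/crictl.yaml provides it):

    # Images live in the k8s.io namespace that the CRI plugin uses
    ctr --namespace k8s.io images ls -q
    # The CRI-level view of the same store
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images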
Jul 12 00:13:01.492308 containerd[1485]: time="2025-07-12T00:13:01.492167259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:01.493242 containerd[1485]: time="2025-07-12T00:13:01.492911454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jul 12 00:13:01.495830 containerd[1485]: time="2025-07-12T00:13:01.494076394Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:01.498512 containerd[1485]: time="2025-07-12T00:13:01.498445389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:01.500046 containerd[1485]: time="2025-07-12T00:13:01.499993349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.357314ms" Jul 12 00:13:01.500213 containerd[1485]: time="2025-07-12T00:13:01.500189539Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:13:01.500892 containerd[1485]: time="2025-07-12T00:13:01.500859723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:13:02.167675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847897192.mount: Deactivated successfully. Jul 12 00:13:03.552579 containerd[1485]: time="2025-07-12T00:13:03.552467629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:03.555107 containerd[1485]: time="2025-07-12T00:13:03.555059548Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" Jul 12 00:13:03.555771 containerd[1485]: time="2025-07-12T00:13:03.555564978Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:03.561275 containerd[1485]: time="2025-07-12T00:13:03.561210198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:03.564900 containerd[1485]: time="2025-07-12T00:13:03.564658315Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.063639929s" Jul 12 00:13:03.564900 containerd[1485]: time="2025-07-12T00:13:03.564728805Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:13:07.064325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
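The "restart counter is at 7" entry is systemd's Restart= machinery at work, not a kubelet feature. A sketch for reading the counter and the restart policy back from the unit (NRestarts needs systemd 235 or newer):

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts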
Jul 12 00:13:07.074215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:13:07.205002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:07.220338 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:13:07.271648 kubelet[2139]: E0712 00:13:07.271602 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:13:07.274962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:13:07.275251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:13:09.924202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:09.931064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:13:09.969893 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-7.scope)... Jul 12 00:13:09.970040 systemd[1]: Reloading... Jul 12 00:13:10.081817 zram_generator::config[2199]: No configuration found. Jul 12 00:13:10.166247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:13:10.238956 systemd[1]: Reloading finished in 268 ms. Jul 12 00:13:10.297547 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:13:10.298414 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:13:10.299387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:10.307601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:13:10.447238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:10.458388 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:13:10.510335 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:13:10.510335 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:13:10.510335 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
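The three deprecation warnings above all point the same way: flags such as --container-runtime-endpoint belong in the KubeletConfiguration file rather than on the command line. A sketch for seeing where each side is set, assuming a kubeadm-managed node (containerRuntimeEndpoint is the config-file counterpart of the flag):

    # Drop-ins still passing flags to the kubelet
    systemctl cat kubelet.service
    # The config-file counterparts, once kubeadm has written the file
    grep -E 'containerRuntimeEndpoint|staticPodPath' /var/lib/kubelet/config.yaml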
Jul 12 00:13:10.510335 kubelet[2242]: I0712 00:13:10.509903 2242 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:13:11.349431 kubelet[2242]: I0712 00:13:11.349380 2242 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:13:11.350855 kubelet[2242]: I0712 00:13:11.349578 2242 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:13:11.350855 kubelet[2242]: I0712 00:13:11.349887 2242 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:13:11.377412 kubelet[2242]: E0712 00:13:11.377342 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.220.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:11.378716 kubelet[2242]: I0712 00:13:11.378671 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:13:11.393450 kubelet[2242]: E0712 00:13:11.393400 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:13:11.393450 kubelet[2242]: I0712 00:13:11.393442 2242 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:13:11.398225 kubelet[2242]: I0712 00:13:11.398192 2242 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" 
Jul 12 00:13:11.399231 kubelet[2242]: I0712 00:13:11.399134 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:13:11.399369 kubelet[2242]: I0712 00:13:11.399295 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:13:11.399608 kubelet[2242]: I0712 00:13:11.399323 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-8926aa35a3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:13:11.399893 kubelet[2242]: I0712 00:13:11.399652 2242 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:13:11.399893 kubelet[2242]: I0712 00:13:11.399663 2242 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:13:11.400048 kubelet[2242]: I0712 00:13:11.399992 2242 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:13:11.403133 kubelet[2242]: I0712 00:13:11.402895 2242 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:13:11.403133 kubelet[2242]: I0712 00:13:11.402931 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:13:11.403133 kubelet[2242]: I0712 00:13:11.402955 2242 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:13:11.403133 kubelet[2242]: I0712 00:13:11.402966 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:13:11.406821 kubelet[2242]: W0712 00:13:11.406340 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.220.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-8926aa35a3&limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:11.406821 kubelet[2242]: E0712 00:13:11.406421 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.220.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-8926aa35a3&limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" 
Jul 12 00:13:11.406981 kubelet[2242]: I0712 00:13:11.406957 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:13:11.407723 kubelet[2242]: I0712 00:13:11.407695 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:13:11.407905 kubelet[2242]: W0712 00:13:11.407886 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:13:11.410000 kubelet[2242]: I0712 00:13:11.409977 2242 server.go:1274] "Started kubelet" Jul 12 00:13:11.415066 kubelet[2242]: I0712 00:13:11.415032 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:13:11.416344 kubelet[2242]: W0712 00:13:11.415997 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.220.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:11.416344 kubelet[2242]: E0712 00:13:11.416071 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.220.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:11.418575 kubelet[2242]: E0712 00:13:11.416134 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.220.16:6443/api/v1/namespaces/default/events\": dial tcp 91.99.220.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-n-8926aa35a3.185158a3ff4c8795 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-n-8926aa35a3,UID:ci-4081-3-4-n-8926aa35a3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-8926aa35a3,},FirstTimestamp:2025-07-12 00:13:11.409952661 +0000 UTC m=+0.945482264,LastTimestamp:2025-07-12 00:13:11.409952661 +0000 UTC m=+0.945482264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-8926aa35a3,}" Jul 12 00:13:11.422814 kubelet[2242]: I0712 00:13:11.422424 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:13:11.423660 kubelet[2242]: I0712 00:13:11.423641 2242 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:13:11.424108 kubelet[2242]: I0712 00:13:11.424078 2242 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:13:11.424482 kubelet[2242]: E0712 00:13:11.424447 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-8926aa35a3\" not found" Jul 12 00:13:11.424866 kubelet[2242]: I0712 00:13:11.424823 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:13:11.425148 kubelet[2242]: I0712 00:13:11.425133 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Jul 12 00:13:11.425463 kubelet[2242]: I0712 00:13:11.425445 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:13:11.427353 kubelet[2242]: I0712 00:13:11.427328 2242 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:13:11.427425 kubelet[2242]: I0712 00:13:11.427401 2242 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:13:11.427816 kubelet[2242]: E0712 00:13:11.427765 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-8926aa35a3?timeout=10s\": dial tcp 91.99.220.16:6443: connect: connection refused" interval="200ms" Jul 12 00:13:11.428486 kubelet[2242]: I0712 00:13:11.428140 2242 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:13:11.428486 kubelet[2242]: I0712 00:13:11.428217 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:13:11.429693 kubelet[2242]: E0712 00:13:11.429655 2242 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:13:11.430486 kubelet[2242]: I0712 00:13:11.430467 2242 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:13:11.439475 kubelet[2242]: I0712 00:13:11.439303 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:13:11.441339 kubelet[2242]: I0712 00:13:11.441298 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" 
Jul 12 00:13:11.441339 kubelet[2242]: I0712 00:13:11.441335 2242 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:13:11.441442 kubelet[2242]: I0712 00:13:11.441358 2242 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:13:11.441442 kubelet[2242]: E0712 00:13:11.441409 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:13:11.450870 kubelet[2242]: W0712 00:13:11.450785 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.220.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:11.450995 kubelet[2242]: E0712 00:13:11.450876 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.220.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:11.453898 kubelet[2242]: W0712 00:13:11.453396 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.220.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:11.453898 kubelet[2242]: E0712 00:13:11.453460 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.220.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:11.455314 kubelet[2242]: I0712 00:13:11.454999 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:13:11.455314 kubelet[2242]: I0712 00:13:11.455016 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:13:11.455314 kubelet[2242]: I0712 00:13:11.455032 2242 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:13:11.457949 kubelet[2242]: I0712 00:13:11.457915 2242 policy_none.go:49] "None policy: Start" Jul 12 00:13:11.458692 kubelet[2242]: I0712 00:13:11.458667 2242 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:13:11.458692 kubelet[2242]: I0712 00:13:11.458695 2242 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:13:11.466843 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 00:13:11.482343 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:13:11.486609 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 12 00:13:11.499822 kubelet[2242]: I0712 00:13:11.498562 2242 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:13:11.499822 kubelet[2242]: I0712 00:13:11.498914 2242 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:13:11.499822 kubelet[2242]: I0712 00:13:11.498935 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:13:11.502927 kubelet[2242]: I0712 00:13:11.502881 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:13:11.504356 kubelet[2242]: E0712 00:13:11.504337 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-n-8926aa35a3\" not found" Jul 12 00:13:11.556067 systemd[1]: Created slice kubepods-burstable-podfd0ee363f6f11376f20733bcf4cc38b8.slice - libcontainer container kubepods-burstable-podfd0ee363f6f11376f20733bcf4cc38b8.slice. Jul 12 00:13:11.577685 systemd[1]: Created slice kubepods-burstable-pod4337ad33fb9163677a63c6f2a669e816.slice - libcontainer container kubepods-burstable-pod4337ad33fb9163677a63c6f2a669e816.slice. Jul 12 00:13:11.591331 systemd[1]: Created slice kubepods-burstable-podcf3048f9764f57568d0b8e60fcfab80e.slice - libcontainer container kubepods-burstable-podcf3048f9764f57568d0b8e60fcfab80e.slice. Jul 12 00:13:11.602717 kubelet[2242]: I0712 00:13:11.602153 2242 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.603109 kubelet[2242]: E0712 00:13:11.602916 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.220.16:6443/api/v1/nodes\": dial tcp 91.99.220.16:6443: connect: connection refused" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.628813 kubelet[2242]: E0712 00:13:11.628665 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-8926aa35a3?timeout=10s\": dial tcp 91.99.220.16:6443: connect: connection refused" interval="400ms" Jul 12 00:13:11.729716 kubelet[2242]: I0712 00:13:11.729648 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.729716 kubelet[2242]: I0712 00:13:11.729720 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730267 kubelet[2242]: I0712 00:13:11.729825 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730267 kubelet[2242]: I0712 00:13:11.729868 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf3048f9764f57568d0b8e60fcfab80e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-8926aa35a3\" (UID: \"cf3048f9764f57568d0b8e60fcfab80e\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-8926aa35a3" 
Jul 12 00:13:11.730267 kubelet[2242]: I0712 00:13:11.729900 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730267 kubelet[2242]: I0712 00:13:11.729928 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730267 kubelet[2242]: I0712 00:13:11.729960 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730556 kubelet[2242]: I0712 00:13:11.729993 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.730556 kubelet[2242]: I0712 00:13:11.730022 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.806608 kubelet[2242]: I0712 00:13:11.806055 2242 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.806608 kubelet[2242]: E0712 00:13:11.806484 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.220.16:6443/api/v1/nodes\": dial tcp 91.99.220.16:6443: connect: connection refused" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:11.873274 containerd[1485]: time="2025-07-12T00:13:11.873119485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-8926aa35a3,Uid:fd0ee363f6f11376f20733bcf4cc38b8,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:11.890012 containerd[1485]: time="2025-07-12T00:13:11.889862804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-8926aa35a3,Uid:4337ad33fb9163677a63c6f2a669e816,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:11.895209 containerd[1485]: time="2025-07-12T00:13:11.894761569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-8926aa35a3,Uid:cf3048f9764f57568d0b8e60fcfab80e,Namespace:kube-system,Attempt:0,}" 
Jul 12 00:13:12.029827 kubelet[2242]: E0712 00:13:12.029698 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-8926aa35a3?timeout=10s\": dial tcp 91.99.220.16:6443: connect: connection refused" interval="800ms" Jul 12 00:13:12.209846 kubelet[2242]: I0712 00:13:12.209626 2242 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:12.210195 kubelet[2242]: E0712 00:13:12.210152 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.99.220.16:6443/api/v1/nodes\": dial tcp 91.99.220.16:6443: connect: connection refused" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:12.441337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193893857.mount: Deactivated successfully. Jul 12 00:13:12.450045 containerd[1485]: time="2025-07-12T00:13:12.449969071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:13:12.451470 containerd[1485]: time="2025-07-12T00:13:12.451322388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:13:12.452437 containerd[1485]: time="2025-07-12T00:13:12.452382720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:13:12.453445 containerd[1485]: time="2025-07-12T00:13:12.453340242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jul 12 00:13:12.456087 containerd[1485]: time="2025-07-12T00:13:12.454701480Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:13:12.456087 containerd[1485]: time="2025-07-12T00:13:12.455935146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:13:12.458125 containerd[1485]: time="2025-07-12T00:13:12.457881034Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:13:12.460674 containerd[1485]: time="2025-07-12T00:13:12.460533343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:13:12.462171 containerd[1485]: time="2025-07-12T00:13:12.462128921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.128664ms" Jul 12 00:13:12.464598 containerd[1485]: time="2025-07-12T00:13:12.464543850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.615626ms" 
Jul 12 00:13:12.477973 containerd[1485]: time="2025-07-12T00:13:12.477896842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.601931ms" Jul 12 00:13:12.498132 kubelet[2242]: W0712 00:13:12.498044 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.220.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:12.498810 kubelet[2242]: E0712 00:13:12.498575 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.220.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:12.511612 kubelet[2242]: W0712 00:13:12.511482 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.220.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-8926aa35a3&limit=500&resourceVersion=0": dial tcp 91.99.220.16:6443: connect: connection refused Jul 12 00:13:12.511612 kubelet[2242]: E0712 00:13:12.511567 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.220.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-8926aa35a3&limit=500&resourceVersion=0\": dial tcp 91.99.220.16:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:13:12.583994 containerd[1485]: time="2025-07-12T00:13:12.583678335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:12.583994 containerd[1485]: time="2025-07-12T00:13:12.583741301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:12.583994 containerd[1485]: time="2025-07-12T00:13:12.583756902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.584242 containerd[1485]: time="2025-07-12T00:13:12.584210541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.589238 containerd[1485]: time="2025-07-12T00:13:12.588991234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:12.589238 containerd[1485]: time="2025-07-12T00:13:12.589040678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 
Jul 12 00:13:12.589238 containerd[1485]: time="2025-07-12T00:13:12.589059120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.589238 containerd[1485]: time="2025-07-12T00:13:12.589139207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.589894 containerd[1485]: time="2025-07-12T00:13:12.589669132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:12.589894 containerd[1485]: time="2025-07-12T00:13:12.589718377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:12.589894 containerd[1485]: time="2025-07-12T00:13:12.589736538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.590576 containerd[1485]: time="2025-07-12T00:13:12.590055246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:12.613585 systemd[1]: Started cri-containerd-73fc88e90ba1a51a304401df7796eecbf0a53ff3ecdc85d0d1191ca49265d158.scope - libcontainer container 73fc88e90ba1a51a304401df7796eecbf0a53ff3ecdc85d0d1191ca49265d158. Jul 12 00:13:12.624010 systemd[1]: Started cri-containerd-b0804572ccb5e552a7cee2db1d760933e24a7de17192a92130585ee307c52fa9.scope - libcontainer container b0804572ccb5e552a7cee2db1d760933e24a7de17192a92130585ee307c52fa9. Jul 12 00:13:12.630504 systemd[1]: Started cri-containerd-678b9f210148f2111892787cf5df662f87f7eaa334af6f89510d14d2d8609f26.scope - libcontainer container 678b9f210148f2111892787cf5df662f87f7eaa334af6f89510d14d2d8609f26. 
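Each RunPodSandbox above produced a pause container wrapped in a cri-containerd-<id>.scope unit; the 73fc88e9…, b0804572… and 678b9f21… ids in the scope names are the sandbox ids. The CRI-level view of the same objects (a sketch):

    # One sandbox per static control-plane pod on this node
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods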
Jul 12 00:13:12.683508 containerd[1485]: time="2025-07-12T00:13:12.683390944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-8926aa35a3,Uid:fd0ee363f6f11376f20733bcf4cc38b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"73fc88e90ba1a51a304401df7796eecbf0a53ff3ecdc85d0d1191ca49265d158\"" Jul 12 00:13:12.689377 containerd[1485]: time="2025-07-12T00:13:12.689255250Z" level=info msg="CreateContainer within sandbox \"73fc88e90ba1a51a304401df7796eecbf0a53ff3ecdc85d0d1191ca49265d158\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:13:12.694056 containerd[1485]: time="2025-07-12T00:13:12.693808803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-8926aa35a3,Uid:4337ad33fb9163677a63c6f2a669e816,Namespace:kube-system,Attempt:0,} returns sandbox id \"678b9f210148f2111892787cf5df662f87f7eaa334af6f89510d14d2d8609f26\"" Jul 12 00:13:12.697345 containerd[1485]: time="2025-07-12T00:13:12.697223018Z" level=info msg="CreateContainer within sandbox \"678b9f210148f2111892787cf5df662f87f7eaa334af6f89510d14d2d8609f26\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:13:12.702115 containerd[1485]: time="2025-07-12T00:13:12.702080277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-8926aa35a3,Uid:cf3048f9764f57568d0b8e60fcfab80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0804572ccb5e552a7cee2db1d760933e24a7de17192a92130585ee307c52fa9\"" Jul 12 00:13:12.705605 containerd[1485]: time="2025-07-12T00:13:12.705513454Z" level=info msg="CreateContainer within sandbox \"b0804572ccb5e552a7cee2db1d760933e24a7de17192a92130585ee307c52fa9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:13:12.719321 containerd[1485]: time="2025-07-12T00:13:12.718241073Z" level=info msg="CreateContainer within sandbox \"73fc88e90ba1a51a304401df7796eecbf0a53ff3ecdc85d0d1191ca49265d158\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a110220b1435ff6127c4708afc09e4f10c32bbb7c13e5cde53b4022ca220a52\"" Jul 12 00:13:12.721080 containerd[1485]: time="2025-07-12T00:13:12.721009592Z" level=info msg="StartContainer for \"5a110220b1435ff6127c4708afc09e4f10c32bbb7c13e5cde53b4022ca220a52\"" Jul 12 00:13:12.724599 containerd[1485]: time="2025-07-12T00:13:12.724328958Z" level=info msg="CreateContainer within sandbox \"678b9f210148f2111892787cf5df662f87f7eaa334af6f89510d14d2d8609f26\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"85cea8667dafbec0af0f787c92e4efb67e83b9288f17097fb0c2a0b083f1a3d2\"" Jul 12 00:13:12.725354 containerd[1485]: time="2025-07-12T00:13:12.725328164Z" level=info msg="StartContainer for \"85cea8667dafbec0af0f787c92e4efb67e83b9288f17097fb0c2a0b083f1a3d2\"" Jul 12 00:13:12.728929 containerd[1485]: time="2025-07-12T00:13:12.728889032Z" level=info msg="CreateContainer within sandbox \"b0804572ccb5e552a7cee2db1d760933e24a7de17192a92130585ee307c52fa9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"069e3fc69969df3a986b9c066fe9e6dab330ea5ee8bc547dfb162b3b95f17dd3\"" Jul 12 00:13:12.730898 containerd[1485]: time="2025-07-12T00:13:12.729567250Z" level=info msg="StartContainer for \"069e3fc69969df3a986b9c066fe9e6dab330ea5ee8bc547dfb162b3b95f17dd3\"" Jul 12 00:13:12.763961 systemd[1]: Started cri-containerd-5a110220b1435ff6127c4708afc09e4f10c32bbb7c13e5cde53b4022ca220a52.scope - libcontainer container 5a110220b1435ff6127c4708afc09e4f10c32bbb7c13e5cde53b4022ca220a52. 
Jul 12 00:13:12.780998 systemd[1]: Started cri-containerd-85cea8667dafbec0af0f787c92e4efb67e83b9288f17097fb0c2a0b083f1a3d2.scope - libcontainer container 85cea8667dafbec0af0f787c92e4efb67e83b9288f17097fb0c2a0b083f1a3d2. Jul 12 00:13:12.790016 systemd[1]: Started cri-containerd-069e3fc69969df3a986b9c066fe9e6dab330ea5ee8bc547dfb162b3b95f17dd3.scope - libcontainer container 069e3fc69969df3a986b9c066fe9e6dab330ea5ee8bc547dfb162b3b95f17dd3. Jul 12 00:13:12.830928 kubelet[2242]: E0712 00:13:12.830485 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.220.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-8926aa35a3?timeout=10s\": dial tcp 91.99.220.16:6443: connect: connection refused" interval="1.6s" Jul 12 00:13:12.842741 containerd[1485]: time="2025-07-12T00:13:12.842589408Z" level=info msg="StartContainer for \"5a110220b1435ff6127c4708afc09e4f10c32bbb7c13e5cde53b4022ca220a52\" returns successfully" Jul 12 00:13:12.854172 containerd[1485]: time="2025-07-12T00:13:12.854131845Z" level=info msg="StartContainer for \"85cea8667dafbec0af0f787c92e4efb67e83b9288f17097fb0c2a0b083f1a3d2\" returns successfully" Jul 12 00:13:12.854379 containerd[1485]: time="2025-07-12T00:13:12.854131885Z" level=info msg="StartContainer for \"069e3fc69969df3a986b9c066fe9e6dab330ea5ee8bc547dfb162b3b95f17dd3\" returns successfully" Jul 12 00:13:13.013053 kubelet[2242]: I0712 00:13:13.012492 2242 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:15.747123 kubelet[2242]: E0712 00:13:15.747079 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-n-8926aa35a3\" not found" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:15.814499 kubelet[2242]: I0712 00:13:15.813312 2242 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:15.814499 kubelet[2242]: E0712 00:13:15.813360 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-n-8926aa35a3\": node \"ci-4081-3-4-n-8926aa35a3\" not found" Jul 12 00:13:16.417546 kubelet[2242]: I0712 00:13:16.417150 2242 apiserver.go:52] "Watching apiserver" Jul 12 00:13:16.428155 kubelet[2242]: I0712 00:13:16.428112 2242 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:13:17.909862 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-7.scope)... Jul 12 00:13:17.910233 systemd[1]: Reloading... Jul 12 00:13:18.000832 zram_generator::config[2562]: No configuration found. Jul 12 00:13:18.104099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:13:18.189915 systemd[1]: Reloading finished in 279 ms. Jul 12 00:13:18.237673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:13:18.252726 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:13:18.253196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:18.253333 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 129.8M memory peak, 0B memory swap peak. Jul 12 00:13:18.260723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
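Until the kube-apiserver container started above begins serving, every kubelet call to https://91.99.220.16:6443 fails with "connection refused", which is why node registration only succeeds a few seconds after the StartContainer entries. Probing the endpoint directly shows the transition (a sketch; -k skips verification of the cluster CA):

    # Refused while the apiserver is still coming up, then returns "ok"
    curl -k https://91.99.220.16:6443/healthz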
Jul 12 00:13:18.400390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:13:18.415461 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:13:18.477504 kubelet[2601]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:13:18.477504 kubelet[2601]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:13:18.477504 kubelet[2601]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:13:18.477504 kubelet[2601]: I0712 00:13:18.477300 2601 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:13:18.487949 kubelet[2601]: I0712 00:13:18.487495 2601 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:13:18.487949 kubelet[2601]: I0712 00:13:18.487537 2601 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:13:18.487949 kubelet[2601]: I0712 00:13:18.487944 2601 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:13:18.490783 kubelet[2601]: I0712 00:13:18.490743 2601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:13:18.495768 kubelet[2601]: I0712 00:13:18.495579 2601 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:13:18.500020 kubelet[2601]: E0712 00:13:18.499960 2601 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:13:18.500020 kubelet[2601]: I0712 00:13:18.500005 2601 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:13:18.502072 kubelet[2601]: I0712 00:13:18.502041 2601 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" 
Jul 12 00:13:18.502208 kubelet[2601]: I0712 00:13:18.502172 2601 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:13:18.502375 kubelet[2601]: I0712 00:13:18.502325 2601 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:13:18.502609 kubelet[2601]: I0712 00:13:18.502357 2601 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-8926aa35a3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:13:18.502609 kubelet[2601]: I0712 00:13:18.502603 2601 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:13:18.502609 kubelet[2601]: I0712 00:13:18.502614 2601 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:13:18.502788 kubelet[2601]: I0712 00:13:18.502649 2601 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:13:18.502788 kubelet[2601]: I0712 00:13:18.502746 2601 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:13:18.502788 kubelet[2601]: I0712 00:13:18.502760 2601 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:13:18.502788 kubelet[2601]: I0712 00:13:18.502780 2601 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:13:18.503894 kubelet[2601]: I0712 00:13:18.503853 2601 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:13:18.506068 kubelet[2601]: I0712 00:13:18.506037 2601 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:13:18.507418 kubelet[2601]: I0712 00:13:18.507372 2601 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:13:18.508631 kubelet[2601]: I0712 00:13:18.508511 2601 server.go:1274] "Started kubelet" Jul 12 00:13:18.510878 kubelet[2601]: I0712 00:13:18.510848 2601 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:13:18.515059 kubelet[2601]: I0712 00:13:18.515018 2601 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 
Jul 12 00:13:18.515867 kubelet[2601]: I0712 00:13:18.515846 2601 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:13:18.518825 kubelet[2601]: I0712 00:13:18.518029 2601 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:13:18.518825 kubelet[2601]: I0712 00:13:18.518296 2601 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:13:18.518825 kubelet[2601]: I0712 00:13:18.518769 2601 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:13:18.526495 kubelet[2601]: I0712 00:13:18.526457 2601 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:13:18.526946 kubelet[2601]: E0712 00:13:18.526918 2601 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-8926aa35a3\" not found" Jul 12 00:13:18.532572 kubelet[2601]: I0712 00:13:18.532526 2601 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:13:18.532910 kubelet[2601]: I0712 00:13:18.532872 2601 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:13:18.565835 kubelet[2601]: I0712 00:13:18.562718 2601 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:13:18.565835 kubelet[2601]: I0712 00:13:18.562822 2601 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:13:18.570329 kubelet[2601]: I0712 00:13:18.570280 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:13:18.572826 kubelet[2601]: E0712 00:13:18.570706 2601 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:13:18.573402 kubelet[2601]: I0712 00:13:18.573023 2601 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" 
Jul 12 00:13:18.573402 kubelet[2601]: I0712 00:13:18.573056 2601 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:13:18.573402 kubelet[2601]: I0712 00:13:18.573077 2601 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:13:18.573402 kubelet[2601]: E0712 00:13:18.573119 2601 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:13:18.576492 kubelet[2601]: I0712 00:13:18.576455 2601 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:13:18.646195 kubelet[2601]: I0712 00:13:18.646163 2601 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:13:18.646905 kubelet[2601]: I0712 00:13:18.646880 2601 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:13:18.647045 kubelet[2601]: I0712 00:13:18.647035 2601 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:13:18.647313 kubelet[2601]: I0712 00:13:18.647293 2601 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:13:18.647505 kubelet[2601]: I0712 00:13:18.647374 2601 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:13:18.647505 kubelet[2601]: I0712 00:13:18.647403 2601 policy_none.go:49] "None policy: Start" Jul 12 00:13:18.648495 kubelet[2601]: I0712 00:13:18.648473 2601 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:13:18.648763 kubelet[2601]: I0712 00:13:18.648729 2601 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:13:18.649703 kubelet[2601]: I0712 00:13:18.649068 2601 state_mem.go:75] "Updated machine memory state" Jul 12 00:13:18.655018 kubelet[2601]: I0712 00:13:18.654971 2601 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:13:18.655177 kubelet[2601]: I0712 00:13:18.655160 2601 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:13:18.655212 kubelet[2601]: I0712 00:13:18.655178 2601 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:13:18.655714 kubelet[2601]: I0712 00:13:18.655684 2601 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:13:18.766893 kubelet[2601]: I0712 00:13:18.766664 2601 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.778048 kubelet[2601]: I0712 00:13:18.777998 2601 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.778215 kubelet[2601]: I0712 00:13:18.778104 2601 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834617 kubelet[2601]: I0712 00:13:18.833965 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834617 kubelet[2601]: I0712 00:13:18.834014 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") 
" pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834617 kubelet[2601]: I0712 00:13:18.834042 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834617 kubelet[2601]: I0712 00:13:18.834068 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834617 kubelet[2601]: I0712 00:13:18.834092 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd0ee363f6f11376f20733bcf4cc38b8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" (UID: \"fd0ee363f6f11376f20733bcf4cc38b8\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834975 kubelet[2601]: I0712 00:13:18.834122 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834975 kubelet[2601]: I0712 00:13:18.834145 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834975 kubelet[2601]: I0712 00:13:18.834167 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4337ad33fb9163677a63c6f2a669e816-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-8926aa35a3\" (UID: \"4337ad33fb9163677a63c6f2a669e816\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:18.834975 kubelet[2601]: I0712 00:13:18.834194 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf3048f9764f57568d0b8e60fcfab80e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-8926aa35a3\" (UID: \"cf3048f9764f57568d0b8e60fcfab80e\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:19.506207 kubelet[2601]: I0712 00:13:19.506122 2601 apiserver.go:52] "Watching apiserver" Jul 12 00:13:19.532777 kubelet[2601]: I0712 00:13:19.532738 2601 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:13:19.617230 kubelet[2601]: E0712 00:13:19.616962 2601 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-4-n-8926aa35a3\" already exists" 
pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:19.671356 kubelet[2601]: I0712 00:13:19.671187 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-n-8926aa35a3" podStartSLOduration=1.6711617410000001 podStartE2EDuration="1.671161741s" podCreationTimestamp="2025-07-12 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:19.648086703 +0000 UTC m=+1.225079807" watchObservedRunningTime="2025-07-12 00:13:19.671161741 +0000 UTC m=+1.248154885" Jul 12 00:13:19.694718 kubelet[2601]: I0712 00:13:19.693526 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-8926aa35a3" podStartSLOduration=1.693506255 podStartE2EDuration="1.693506255s" podCreationTimestamp="2025-07-12 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:19.672089999 +0000 UTC m=+1.249083183" watchObservedRunningTime="2025-07-12 00:13:19.693506255 +0000 UTC m=+1.270499359" Jul 12 00:13:19.717562 kubelet[2601]: I0712 00:13:19.717398 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-n-8926aa35a3" podStartSLOduration=1.717382704 podStartE2EDuration="1.717382704s" podCreationTimestamp="2025-07-12 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:19.697137761 +0000 UTC m=+1.274130865" watchObservedRunningTime="2025-07-12 00:13:19.717382704 +0000 UTC m=+1.294375768" Jul 12 00:13:23.500662 kubelet[2601]: I0712 00:13:23.500369 2601 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:13:23.501866 containerd[1485]: time="2025-07-12T00:13:23.501684549Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:13:23.503065 kubelet[2601]: I0712 00:13:23.502114 2601 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:13:24.283754 systemd[1]: Created slice kubepods-besteffort-pod2bbf664b_90cd_4667_b519_7416c0833861.slice - libcontainer container kubepods-besteffort-pod2bbf664b_90cd_4667_b519_7416c0833861.slice. 
Jul 12 00:13:24.370885 kubelet[2601]: I0712 00:13:24.370578 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bbf664b-90cd-4667-b519-7416c0833861-lib-modules\") pod \"kube-proxy-cjrmm\" (UID: \"2bbf664b-90cd-4667-b519-7416c0833861\") " pod="kube-system/kube-proxy-cjrmm" Jul 12 00:13:24.370885 kubelet[2601]: I0712 00:13:24.370657 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67gf\" (UniqueName: \"kubernetes.io/projected/2bbf664b-90cd-4667-b519-7416c0833861-kube-api-access-r67gf\") pod \"kube-proxy-cjrmm\" (UID: \"2bbf664b-90cd-4667-b519-7416c0833861\") " pod="kube-system/kube-proxy-cjrmm" Jul 12 00:13:24.370885 kubelet[2601]: I0712 00:13:24.370703 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bbf664b-90cd-4667-b519-7416c0833861-kube-proxy\") pod \"kube-proxy-cjrmm\" (UID: \"2bbf664b-90cd-4667-b519-7416c0833861\") " pod="kube-system/kube-proxy-cjrmm" Jul 12 00:13:24.370885 kubelet[2601]: I0712 00:13:24.370737 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bbf664b-90cd-4667-b519-7416c0833861-xtables-lock\") pod \"kube-proxy-cjrmm\" (UID: \"2bbf664b-90cd-4667-b519-7416c0833861\") " pod="kube-system/kube-proxy-cjrmm" Jul 12 00:13:24.597266 containerd[1485]: time="2025-07-12T00:13:24.597210065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjrmm,Uid:2bbf664b-90cd-4667-b519-7416c0833861,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:24.620728 systemd[1]: Created slice kubepods-besteffort-pod08596bf7_9df5_4f46_bbc4_4f6ff1814ef7.slice - libcontainer container kubepods-besteffort-pod08596bf7_9df5_4f46_bbc4_4f6ff1814ef7.slice. Jul 12 00:13:24.643706 containerd[1485]: time="2025-07-12T00:13:24.643591540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:24.643706 containerd[1485]: time="2025-07-12T00:13:24.643714266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:24.643906 containerd[1485]: time="2025-07-12T00:13:24.643743027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:24.643906 containerd[1485]: time="2025-07-12T00:13:24.643874794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:24.672227 kubelet[2601]: I0712 00:13:24.672084 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzvm\" (UniqueName: \"kubernetes.io/projected/08596bf7-9df5-4f46-bbc4-4f6ff1814ef7-kube-api-access-9lzvm\") pod \"tigera-operator-5bf8dfcb4-6mbrg\" (UID: \"08596bf7-9df5-4f46-bbc4-4f6ff1814ef7\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6mbrg" Jul 12 00:13:24.672227 kubelet[2601]: I0712 00:13:24.672129 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08596bf7-9df5-4f46-bbc4-4f6ff1814ef7-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-6mbrg\" (UID: \"08596bf7-9df5-4f46-bbc4-4f6ff1814ef7\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-6mbrg" Jul 12 00:13:24.681630 systemd[1]: Started cri-containerd-944b70d40b329f9ad67fd6665b65b8245942e820ab0b5ac4f944a94479170790.scope - libcontainer container 944b70d40b329f9ad67fd6665b65b8245942e820ab0b5ac4f944a94479170790. Jul 12 00:13:24.716597 containerd[1485]: time="2025-07-12T00:13:24.716098461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cjrmm,Uid:2bbf664b-90cd-4667-b519-7416c0833861,Namespace:kube-system,Attempt:0,} returns sandbox id \"944b70d40b329f9ad67fd6665b65b8245942e820ab0b5ac4f944a94479170790\"" Jul 12 00:13:24.727176 containerd[1485]: time="2025-07-12T00:13:24.727004895Z" level=info msg="CreateContainer within sandbox \"944b70d40b329f9ad67fd6665b65b8245942e820ab0b5ac4f944a94479170790\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:13:24.748562 containerd[1485]: time="2025-07-12T00:13:24.748395781Z" level=info msg="CreateContainer within sandbox \"944b70d40b329f9ad67fd6665b65b8245942e820ab0b5ac4f944a94479170790\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0c568a9dd1145c48a9b95fb8ecbc21c89bc7980684127acb3707ec3d39d0e68\"" Jul 12 00:13:24.749609 containerd[1485]: time="2025-07-12T00:13:24.749547279Z" level=info msg="StartContainer for \"b0c568a9dd1145c48a9b95fb8ecbc21c89bc7980684127acb3707ec3d39d0e68\"" Jul 12 00:13:24.785961 systemd[1]: Started cri-containerd-b0c568a9dd1145c48a9b95fb8ecbc21c89bc7980684127acb3707ec3d39d0e68.scope - libcontainer container b0c568a9dd1145c48a9b95fb8ecbc21c89bc7980684127acb3707ec3d39d0e68. Jul 12 00:13:24.818140 containerd[1485]: time="2025-07-12T00:13:24.818075878Z" level=info msg="StartContainer for \"b0c568a9dd1145c48a9b95fb8ecbc21c89bc7980684127acb3707ec3d39d0e68\" returns successfully" Jul 12 00:13:24.931206 containerd[1485]: time="2025-07-12T00:13:24.931071055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6mbrg,Uid:08596bf7-9df5-4f46-bbc4-4f6ff1814ef7,Namespace:tigera-operator,Attempt:0,}" Jul 12 00:13:24.966431 containerd[1485]: time="2025-07-12T00:13:24.964385187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:24.966431 containerd[1485]: time="2025-07-12T00:13:24.964439710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:24.966431 containerd[1485]: time="2025-07-12T00:13:24.964464111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:24.966431 containerd[1485]: time="2025-07-12T00:13:24.964590517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:24.991170 systemd[1]: Started cri-containerd-70a0ef081e3c9598e2133cad0c5110df85484e1a2f283878d009b48f361fb6ad.scope - libcontainer container 70a0ef081e3c9598e2133cad0c5110df85484e1a2f283878d009b48f361fb6ad. Jul 12 00:13:25.046041 containerd[1485]: time="2025-07-12T00:13:25.045934721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-6mbrg,Uid:08596bf7-9df5-4f46-bbc4-4f6ff1814ef7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70a0ef081e3c9598e2133cad0c5110df85484e1a2f283878d009b48f361fb6ad\"" Jul 12 00:13:25.048393 containerd[1485]: time="2025-07-12T00:13:25.048305517Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 12 00:13:26.568629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089891493.mount: Deactivated successfully. Jul 12 00:13:26.681599 kubelet[2601]: I0712 00:13:26.681443 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cjrmm" podStartSLOduration=2.680480111 podStartE2EDuration="2.680480111s" podCreationTimestamp="2025-07-12 00:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:13:25.643456803 +0000 UTC m=+7.220449907" watchObservedRunningTime="2025-07-12 00:13:26.680480111 +0000 UTC m=+8.257473215" Jul 12 00:13:27.089971 containerd[1485]: time="2025-07-12T00:13:27.089215570Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:27.091695 containerd[1485]: time="2025-07-12T00:13:27.091377229Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 12 00:13:27.093255 containerd[1485]: time="2025-07-12T00:13:27.093207472Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:27.096594 containerd[1485]: time="2025-07-12T00:13:27.096535983Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:27.097993 containerd[1485]: time="2025-07-12T00:13:27.097913846Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.049448041s" Jul 12 00:13:27.097993 containerd[1485]: time="2025-07-12T00:13:27.097961728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 12 00:13:27.104327 containerd[1485]: time="2025-07-12T00:13:27.104159409Z" level=info msg="CreateContainer within sandbox \"70a0ef081e3c9598e2133cad0c5110df85484e1a2f283878d009b48f361fb6ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 12 00:13:27.119278 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3475900380.mount: Deactivated successfully. Jul 12 00:13:27.120519 containerd[1485]: time="2025-07-12T00:13:27.120439029Z" level=info msg="CreateContainer within sandbox \"70a0ef081e3c9598e2133cad0c5110df85484e1a2f283878d009b48f361fb6ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7e409018896cbacf42e2002056cc802d3b3afd729e074cda520ff0d460fa556\"" Jul 12 00:13:27.121336 containerd[1485]: time="2025-07-12T00:13:27.120991054Z" level=info msg="StartContainer for \"e7e409018896cbacf42e2002056cc802d3b3afd729e074cda520ff0d460fa556\"" Jul 12 00:13:27.155151 systemd[1]: Started cri-containerd-e7e409018896cbacf42e2002056cc802d3b3afd729e074cda520ff0d460fa556.scope - libcontainer container e7e409018896cbacf42e2002056cc802d3b3afd729e074cda520ff0d460fa556. Jul 12 00:13:27.185403 containerd[1485]: time="2025-07-12T00:13:27.185318255Z" level=info msg="StartContainer for \"e7e409018896cbacf42e2002056cc802d3b3afd729e074cda520ff0d460fa556\" returns successfully" Jul 12 00:13:31.165134 kubelet[2601]: I0712 00:13:31.164012 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mbrg" podStartSLOduration=5.111934376 podStartE2EDuration="7.163989018s" podCreationTimestamp="2025-07-12 00:13:24 +0000 UTC" firstStartedPulling="2025-07-12 00:13:25.047669046 +0000 UTC m=+6.624662150" lastFinishedPulling="2025-07-12 00:13:27.099723648 +0000 UTC m=+8.676716792" observedRunningTime="2025-07-12 00:13:27.667373829 +0000 UTC m=+9.244366933" watchObservedRunningTime="2025-07-12 00:13:31.163989018 +0000 UTC m=+12.740982122" Jul 12 00:13:33.342774 sudo[1763]: pam_unix(sudo:session): session closed for user root Jul 12 00:13:33.507095 sshd[1760]: pam_unix(sshd:session): session closed for user core Jul 12 00:13:33.514554 systemd[1]: sshd@6-91.99.220.16:22-139.178.68.195:54660.service: Deactivated successfully. Jul 12 00:13:33.518321 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:13:33.519724 systemd[1]: session-7.scope: Consumed 8.133s CPU time, 147.4M memory peak, 0B memory swap peak. Jul 12 00:13:33.523741 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:13:33.527779 systemd-logind[1460]: Removed session 7. Jul 12 00:13:41.465513 systemd[1]: Created slice kubepods-besteffort-pod5f75bafa_e5c1_44e1_84d8_6f08b3a92b0c.slice - libcontainer container kubepods-besteffort-pod5f75bafa_e5c1_44e1_84d8_6f08b3a92b0c.slice. 
Jul 12 00:13:41.583867 kubelet[2601]: I0712 00:13:41.583740 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blsp5\" (UniqueName: \"kubernetes.io/projected/5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c-kube-api-access-blsp5\") pod \"calico-typha-67c84bf66f-wfp86\" (UID: \"5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c\") " pod="calico-system/calico-typha-67c84bf66f-wfp86" Jul 12 00:13:41.583867 kubelet[2601]: I0712 00:13:41.583809 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c-tigera-ca-bundle\") pod \"calico-typha-67c84bf66f-wfp86\" (UID: \"5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c\") " pod="calico-system/calico-typha-67c84bf66f-wfp86" Jul 12 00:13:41.583867 kubelet[2601]: I0712 00:13:41.583831 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c-typha-certs\") pod \"calico-typha-67c84bf66f-wfp86\" (UID: \"5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c\") " pod="calico-system/calico-typha-67c84bf66f-wfp86" Jul 12 00:13:41.647596 systemd[1]: Created slice kubepods-besteffort-podc1b9b272_7f1d_42c0_8950_5900ed9c4def.slice - libcontainer container kubepods-besteffort-podc1b9b272_7f1d_42c0_8950_5900ed9c4def.slice. Jul 12 00:13:41.776482 containerd[1485]: time="2025-07-12T00:13:41.775239721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c84bf66f-wfp86,Uid:5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:41.778483 kubelet[2601]: E0712 00:13:41.778430 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:41.785118 kubelet[2601]: I0712 00:13:41.785077 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-cni-log-dir\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785118 kubelet[2601]: I0712 00:13:41.785120 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c1b9b272-7f1d-42c0-8950-5900ed9c4def-node-certs\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785281 kubelet[2601]: I0712 00:13:41.785137 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c9d45977-08ad-4a73-90ca-4efb866e9fdb-registration-dir\") pod \"csi-node-driver-twcjp\" (UID: \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\") " pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:41.785281 kubelet[2601]: I0712 00:13:41.785153 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c9d45977-08ad-4a73-90ca-4efb866e9fdb-varrun\") pod \"csi-node-driver-twcjp\" (UID: 
\"c9d45977-08ad-4a73-90ca-4efb866e9fdb\") " pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:41.785281 kubelet[2601]: I0712 00:13:41.785170 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-var-run-calico\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785355 kubelet[2601]: I0712 00:13:41.785281 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-cni-net-dir\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785355 kubelet[2601]: I0712 00:13:41.785301 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1b9b272-7f1d-42c0-8950-5900ed9c4def-tigera-ca-bundle\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785355 kubelet[2601]: I0712 00:13:41.785317 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9d45977-08ad-4a73-90ca-4efb866e9fdb-kubelet-dir\") pod \"csi-node-driver-twcjp\" (UID: \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\") " pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:41.785355 kubelet[2601]: I0712 00:13:41.785348 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-lib-modules\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785480 kubelet[2601]: I0712 00:13:41.785365 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-policysync\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785480 kubelet[2601]: I0712 00:13:41.785430 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gh6\" (UniqueName: \"kubernetes.io/projected/c9d45977-08ad-4a73-90ca-4efb866e9fdb-kube-api-access-v6gh6\") pod \"csi-node-driver-twcjp\" (UID: \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\") " pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:41.785480 kubelet[2601]: I0712 00:13:41.785447 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-cni-bin-dir\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.785480 kubelet[2601]: I0712 00:13:41.785461 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c9d45977-08ad-4a73-90ca-4efb866e9fdb-socket-dir\") pod \"csi-node-driver-twcjp\" (UID: \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\") " 
pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:41.785562 kubelet[2601]: I0712 00:13:41.785487 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-flexvol-driver-host\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.786089 kubelet[2601]: I0712 00:13:41.785990 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-var-lib-calico\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.786089 kubelet[2601]: I0712 00:13:41.786080 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1b9b272-7f1d-42c0-8950-5900ed9c4def-xtables-lock\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.787107 kubelet[2601]: I0712 00:13:41.786098 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9g6j\" (UniqueName: \"kubernetes.io/projected/c1b9b272-7f1d-42c0-8950-5900ed9c4def-kube-api-access-h9g6j\") pod \"calico-node-ptccc\" (UID: \"c1b9b272-7f1d-42c0-8950-5900ed9c4def\") " pod="calico-system/calico-node-ptccc" Jul 12 00:13:41.826485 containerd[1485]: time="2025-07-12T00:13:41.826362360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:41.826622 containerd[1485]: time="2025-07-12T00:13:41.826574687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:41.826653 containerd[1485]: time="2025-07-12T00:13:41.826613448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:41.826991 containerd[1485]: time="2025-07-12T00:13:41.826889417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:41.860093 systemd[1]: Started cri-containerd-26bdead1b23e60a04d42e83a9ed3570026c2e6b361f83fca0b75f6d57a954190.scope - libcontainer container 26bdead1b23e60a04d42e83a9ed3570026c2e6b361f83fca0b75f6d57a954190. Jul 12 00:13:41.909824 kubelet[2601]: E0712 00:13:41.907908 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:13:41.909824 kubelet[2601]: W0712 00:13:41.907934 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:13:41.909824 kubelet[2601]: E0712 00:13:41.907955 2601 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 12 00:13:41.919098 kubelet[2601]: E0712 00:13:41.918984 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:13:41.920304 kubelet[2601]: W0712 00:13:41.920272 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:13:41.920459 kubelet[2601]: E0712 00:13:41.920441 2601 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:13:41.953853 containerd[1485]: time="2025-07-12T00:13:41.953811489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptccc,Uid:c1b9b272-7f1d-42c0-8950-5900ed9c4def,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:41.954165 kubelet[2601]: E0712 00:13:41.954066 2601 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 12 00:13:41.954165 kubelet[2601]: W0712 00:13:41.954122 2601 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 12 00:13:41.954165 kubelet[2601]: E0712 00:13:41.954143 2601 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 12 00:13:41.991306 containerd[1485]: time="2025-07-12T00:13:41.990309483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:41.991540 containerd[1485]: time="2025-07-12T00:13:41.991274232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:41.991540 containerd[1485]: time="2025-07-12T00:13:41.991287753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:41.991540 containerd[1485]: time="2025-07-12T00:13:41.991392716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:42.018993 systemd[1]: Started cri-containerd-eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a.scope - libcontainer container eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a. 
Jul 12 00:13:42.083339 containerd[1485]: time="2025-07-12T00:13:42.083235946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ptccc,Uid:c1b9b272-7f1d-42c0-8950-5900ed9c4def,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\"" Jul 12 00:13:42.089260 containerd[1485]: time="2025-07-12T00:13:42.089206604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 12 00:13:42.106500 containerd[1485]: time="2025-07-12T00:13:42.106164311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c84bf66f-wfp86,Uid:5f75bafa-e5c1-44e1-84d8-6f08b3a92b0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"26bdead1b23e60a04d42e83a9ed3570026c2e6b361f83fca0b75f6d57a954190\"" Jul 12 00:13:43.537355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650001585.mount: Deactivated successfully. Jul 12 00:13:43.574657 kubelet[2601]: E0712 00:13:43.573727 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:43.673986 containerd[1485]: time="2025-07-12T00:13:43.673858107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:43.675731 containerd[1485]: time="2025-07-12T00:13:43.674143676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 12 00:13:43.677068 containerd[1485]: time="2025-07-12T00:13:43.676972119Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:43.680471 containerd[1485]: time="2025-07-12T00:13:43.680403379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:43.681960 containerd[1485]: time="2025-07-12T00:13:43.681901503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.592642098s" Jul 12 00:13:43.681960 containerd[1485]: time="2025-07-12T00:13:43.681946984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 12 00:13:43.683402 containerd[1485]: time="2025-07-12T00:13:43.683359266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 12 00:13:43.688592 containerd[1485]: time="2025-07-12T00:13:43.688375693Z" level=info msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 12 00:13:43.707097 containerd[1485]: time="2025-07-12T00:13:43.706985838Z" level=info 
msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc\"" Jul 12 00:13:43.709063 containerd[1485]: time="2025-07-12T00:13:43.709007857Z" level=info msg="StartContainer for \"1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc\"" Jul 12 00:13:43.745007 systemd[1]: Started cri-containerd-1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc.scope - libcontainer container 1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc. Jul 12 00:13:43.783917 containerd[1485]: time="2025-07-12T00:13:43.783768246Z" level=info msg="StartContainer for \"1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc\" returns successfully" Jul 12 00:13:43.803758 systemd[1]: cri-containerd-1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc.scope: Deactivated successfully. Jul 12 00:13:43.836953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc-rootfs.mount: Deactivated successfully. Jul 12 00:13:43.900020 containerd[1485]: time="2025-07-12T00:13:43.899927527Z" level=info msg="shim disconnected" id=1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc namespace=k8s.io Jul 12 00:13:43.900020 containerd[1485]: time="2025-07-12T00:13:43.900003769Z" level=warning msg="cleaning up after shim disconnected" id=1ca7436083ea4e78b153dc5c8832c3d725105f88929c40b26f05586134c8a6dc namespace=k8s.io Jul 12 00:13:43.900020 containerd[1485]: time="2025-07-12T00:13:43.900012770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:13:45.500635 containerd[1485]: time="2025-07-12T00:13:45.499260342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:45.501502 containerd[1485]: time="2025-07-12T00:13:45.501378562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 12 00:13:45.501502 containerd[1485]: time="2025-07-12T00:13:45.501476645Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:45.503851 containerd[1485]: time="2025-07-12T00:13:45.503815871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:45.504509 containerd[1485]: time="2025-07-12T00:13:45.504472329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.821075942s" Jul 12 00:13:45.504509 containerd[1485]: time="2025-07-12T00:13:45.504507170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 12 00:13:45.508043 containerd[1485]: time="2025-07-12T00:13:45.507999629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 12 
00:13:45.523359 containerd[1485]: time="2025-07-12T00:13:45.523175177Z" level=info msg="CreateContainer within sandbox \"26bdead1b23e60a04d42e83a9ed3570026c2e6b361f83fca0b75f6d57a954190\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 12 00:13:45.543345 containerd[1485]: time="2025-07-12T00:13:45.543286024Z" level=info msg="CreateContainer within sandbox \"26bdead1b23e60a04d42e83a9ed3570026c2e6b361f83fca0b75f6d57a954190\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6ae8dac529a1cfd34a5da5a1dea1d1f5b6714939344ed920143d805ef14298cd\"" Jul 12 00:13:45.544507 containerd[1485]: time="2025-07-12T00:13:45.544459497Z" level=info msg="StartContainer for \"6ae8dac529a1cfd34a5da5a1dea1d1f5b6714939344ed920143d805ef14298cd\"" Jul 12 00:13:45.574595 kubelet[2601]: E0712 00:13:45.574541 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:45.584001 systemd[1]: Started cri-containerd-6ae8dac529a1cfd34a5da5a1dea1d1f5b6714939344ed920143d805ef14298cd.scope - libcontainer container 6ae8dac529a1cfd34a5da5a1dea1d1f5b6714939344ed920143d805ef14298cd. Jul 12 00:13:45.627200 containerd[1485]: time="2025-07-12T00:13:45.627138469Z" level=info msg="StartContainer for \"6ae8dac529a1cfd34a5da5a1dea1d1f5b6714939344ed920143d805ef14298cd\" returns successfully" Jul 12 00:13:46.694693 kubelet[2601]: I0712 00:13:46.694596 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:47.576043 kubelet[2601]: E0712 00:13:47.574541 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:47.831886 containerd[1485]: time="2025-07-12T00:13:47.830343244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:47.831886 containerd[1485]: time="2025-07-12T00:13:47.831703561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 12 00:13:47.832861 containerd[1485]: time="2025-07-12T00:13:47.832751950Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:47.835769 containerd[1485]: time="2025-07-12T00:13:47.835734991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:47.837305 containerd[1485]: time="2025-07-12T00:13:47.836742738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.328694908s" Jul 12 00:13:47.837305 containerd[1485]: 
time="2025-07-12T00:13:47.836782779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 12 00:13:47.840602 containerd[1485]: time="2025-07-12T00:13:47.840568123Z" level=info msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 12 00:13:47.861586 containerd[1485]: time="2025-07-12T00:13:47.861525894Z" level=info msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6\"" Jul 12 00:13:47.864321 containerd[1485]: time="2025-07-12T00:13:47.864252048Z" level=info msg="StartContainer for \"c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6\"" Jul 12 00:13:47.899431 systemd[1]: run-containerd-runc-k8s.io-c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6-runc.XXI8uQ.mount: Deactivated successfully. Jul 12 00:13:47.909872 systemd[1]: Started cri-containerd-c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6.scope - libcontainer container c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6. Jul 12 00:13:47.940597 containerd[1485]: time="2025-07-12T00:13:47.940221478Z" level=info msg="StartContainer for \"c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6\" returns successfully" Jul 12 00:13:48.446725 containerd[1485]: time="2025-07-12T00:13:48.446662368Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:13:48.448861 systemd[1]: cri-containerd-c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6.scope: Deactivated successfully. Jul 12 00:13:48.475381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6-rootfs.mount: Deactivated successfully. Jul 12 00:13:48.492327 kubelet[2601]: I0712 00:13:48.491890 2601 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:13:48.530273 kubelet[2601]: I0712 00:13:48.530188 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67c84bf66f-wfp86" podStartSLOduration=4.133312756 podStartE2EDuration="7.530169527s" podCreationTimestamp="2025-07-12 00:13:41 +0000 UTC" firstStartedPulling="2025-07-12 00:13:42.110349636 +0000 UTC m=+23.687342700" lastFinishedPulling="2025-07-12 00:13:45.507206367 +0000 UTC m=+27.084199471" observedRunningTime="2025-07-12 00:13:45.712509676 +0000 UTC m=+27.289502780" watchObservedRunningTime="2025-07-12 00:13:48.530169527 +0000 UTC m=+30.107162591" Jul 12 00:13:48.544785 systemd[1]: Created slice kubepods-burstable-podf208ae55_8f4f_4d14_a26a_564c62f2524f.slice - libcontainer container kubepods-burstable-podf208ae55_8f4f_4d14_a26a_564c62f2524f.slice. 
Jul 12 00:13:48.545190 kubelet[2601]: I0712 00:13:48.544886 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/263f7977-4a38-4140-adcf-d1a6d16328ea-calico-apiserver-certs\") pod \"calico-apiserver-5bcd7b9b6d-g2glf\" (UID: \"263f7977-4a38-4140-adcf-d1a6d16328ea\") " pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" Jul 12 00:13:48.545190 kubelet[2601]: I0712 00:13:48.544919 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5-calico-apiserver-certs\") pod \"calico-apiserver-5bcd7b9b6d-97z8q\" (UID: \"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5\") " pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" Jul 12 00:13:48.545190 kubelet[2601]: I0712 00:13:48.544949 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbk2\" (UniqueName: \"kubernetes.io/projected/263f7977-4a38-4140-adcf-d1a6d16328ea-kube-api-access-lzbk2\") pod \"calico-apiserver-5bcd7b9b6d-g2glf\" (UID: \"263f7977-4a38-4140-adcf-d1a6d16328ea\") " pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" Jul 12 00:13:48.545190 kubelet[2601]: I0712 00:13:48.544966 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2llhj\" (UniqueName: \"kubernetes.io/projected/4d576838-2540-49b2-98fa-baaffb730d5f-kube-api-access-2llhj\") pod \"whisker-674cf9c754-qpdxf\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " pod="calico-system/whisker-674cf9c754-qpdxf" Jul 12 00:13:48.545190 kubelet[2601]: I0712 00:13:48.544984 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f208ae55-8f4f-4d14-a26a-564c62f2524f-config-volume\") pod \"coredns-7c65d6cfc9-rnjg4\" (UID: \"f208ae55-8f4f-4d14-a26a-564c62f2524f\") " pod="kube-system/coredns-7c65d6cfc9-rnjg4" Jul 12 00:13:48.545334 kubelet[2601]: I0712 00:13:48.545001 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwxb\" (UniqueName: \"kubernetes.io/projected/f208ae55-8f4f-4d14-a26a-564c62f2524f-kube-api-access-6bwxb\") pod \"coredns-7c65d6cfc9-rnjg4\" (UID: \"f208ae55-8f4f-4d14-a26a-564c62f2524f\") " pod="kube-system/coredns-7c65d6cfc9-rnjg4" Jul 12 00:13:48.545334 kubelet[2601]: I0712 00:13:48.545018 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-backend-key-pair\") pod \"whisker-674cf9c754-qpdxf\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " pod="calico-system/whisker-674cf9c754-qpdxf" Jul 12 00:13:48.545334 kubelet[2601]: I0712 00:13:48.545035 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sxb\" (UniqueName: \"kubernetes.io/projected/e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5-kube-api-access-l9sxb\") pod \"calico-apiserver-5bcd7b9b6d-97z8q\" (UID: \"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5\") " pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" Jul 12 00:13:48.545334 kubelet[2601]: I0712 00:13:48.545051 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-ca-bundle\") pod \"whisker-674cf9c754-qpdxf\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " pod="calico-system/whisker-674cf9c754-qpdxf" Jul 12 00:13:48.561973 systemd[1]: Created slice kubepods-besteffort-pod263f7977_4a38_4140_adcf_d1a6d16328ea.slice - libcontainer container kubepods-besteffort-pod263f7977_4a38_4140_adcf_d1a6d16328ea.slice. Jul 12 00:13:48.585646 systemd[1]: Created slice kubepods-besteffort-pod4d576838_2540_49b2_98fa_baaffb730d5f.slice - libcontainer container kubepods-besteffort-pod4d576838_2540_49b2_98fa_baaffb730d5f.slice. Jul 12 00:13:48.603151 containerd[1485]: time="2025-07-12T00:13:48.602914519Z" level=info msg="shim disconnected" id=c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6 namespace=k8s.io Jul 12 00:13:48.603151 containerd[1485]: time="2025-07-12T00:13:48.602984000Z" level=warning msg="cleaning up after shim disconnected" id=c577d11660eddce0be181bac75fc29c99cb143076f80d38ab89662a5191ab6a6 namespace=k8s.io Jul 12 00:13:48.603151 containerd[1485]: time="2025-07-12T00:13:48.602996561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:13:48.609558 systemd[1]: Created slice kubepods-besteffort-pode0cd4f15_360a_4b9e_86d5_922d2b6ce0b5.slice - libcontainer container kubepods-besteffort-pode0cd4f15_360a_4b9e_86d5_922d2b6ce0b5.slice. Jul 12 00:13:48.624766 systemd[1]: Created slice kubepods-besteffort-pod8dcaf788_c931_4d5f_8dbb_aa867fceaa4c.slice - libcontainer container kubepods-besteffort-pod8dcaf788_c931_4d5f_8dbb_aa867fceaa4c.slice. Jul 12 00:13:48.638671 systemd[1]: Created slice kubepods-besteffort-poda5d34381_7fee_470d_b68b_74c007d52fdd.slice - libcontainer container kubepods-besteffort-poda5d34381_7fee_470d_b68b_74c007d52fdd.slice. 
Jul 12 00:13:48.645417 kubelet[2601]: I0712 00:13:48.645343 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcaf788-c931-4d5f-8dbb-aa867fceaa4c-config\") pod \"goldmane-58fd7646b9-25c9q\" (UID: \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\") " pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:48.645417 kubelet[2601]: I0712 00:13:48.645425 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcaf788-c931-4d5f-8dbb-aa867fceaa4c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-25c9q\" (UID: \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\") " pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:48.645679 kubelet[2601]: I0712 00:13:48.645458 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd55186a-824e-478a-8f31-23c3f2558e58-config-volume\") pod \"coredns-7c65d6cfc9-pq9dw\" (UID: \"fd55186a-824e-478a-8f31-23c3f2558e58\") " pod="kube-system/coredns-7c65d6cfc9-pq9dw" Jul 12 00:13:48.645679 kubelet[2601]: I0712 00:13:48.645476 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5d34381-7fee-470d-b68b-74c007d52fdd-tigera-ca-bundle\") pod \"calico-kube-controllers-694bc47746-lklc8\" (UID: \"a5d34381-7fee-470d-b68b-74c007d52fdd\") " pod="calico-system/calico-kube-controllers-694bc47746-lklc8" Jul 12 00:13:48.645679 kubelet[2601]: I0712 00:13:48.645505 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trblc\" (UniqueName: \"kubernetes.io/projected/8dcaf788-c931-4d5f-8dbb-aa867fceaa4c-kube-api-access-trblc\") pod \"goldmane-58fd7646b9-25c9q\" (UID: \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\") " pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:48.645679 kubelet[2601]: I0712 00:13:48.645540 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nlx\" (UniqueName: \"kubernetes.io/projected/fd55186a-824e-478a-8f31-23c3f2558e58-kube-api-access-f8nlx\") pod \"coredns-7c65d6cfc9-pq9dw\" (UID: \"fd55186a-824e-478a-8f31-23c3f2558e58\") " pod="kube-system/coredns-7c65d6cfc9-pq9dw" Jul 12 00:13:48.645679 kubelet[2601]: I0712 00:13:48.645574 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8dcaf788-c931-4d5f-8dbb-aa867fceaa4c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-25c9q\" (UID: \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\") " pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:48.649020 kubelet[2601]: I0712 00:13:48.645591 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfqm\" (UniqueName: \"kubernetes.io/projected/a5d34381-7fee-470d-b68b-74c007d52fdd-kube-api-access-rkfqm\") pod \"calico-kube-controllers-694bc47746-lklc8\" (UID: \"a5d34381-7fee-470d-b68b-74c007d52fdd\") " pod="calico-system/calico-kube-controllers-694bc47746-lklc8" Jul 12 00:13:48.650654 systemd[1]: Created slice kubepods-burstable-podfd55186a_824e_478a_8f31_23c3f2558e58.slice - libcontainer container kubepods-burstable-podfd55186a_824e_478a_8f31_23c3f2558e58.slice. 
Jul 12 00:13:48.706642 containerd[1485]: time="2025-07-12T00:13:48.704734009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 12 00:13:48.857736 containerd[1485]: time="2025-07-12T00:13:48.857007093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rnjg4,Uid:f208ae55-8f4f-4d14-a26a-564c62f2524f,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:48.878718 containerd[1485]: time="2025-07-12T00:13:48.876885707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-g2glf,Uid:263f7977-4a38-4140-adcf-d1a6d16328ea,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:13:48.902847 containerd[1485]: time="2025-07-12T00:13:48.902806042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674cf9c754-qpdxf,Uid:4d576838-2540-49b2-98fa-baaffb730d5f,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:48.917437 containerd[1485]: time="2025-07-12T00:13:48.917171547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-97z8q,Uid:e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5,Namespace:calico-apiserver,Attempt:0,}" Jul 12 00:13:48.934102 containerd[1485]: time="2025-07-12T00:13:48.934058200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-25c9q,Uid:8dcaf788-c931-4d5f-8dbb-aa867fceaa4c,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:48.946081 containerd[1485]: time="2025-07-12T00:13:48.946032961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694bc47746-lklc8,Uid:a5d34381-7fee-470d-b68b-74c007d52fdd,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:48.958997 containerd[1485]: time="2025-07-12T00:13:48.958885826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq9dw,Uid:fd55186a-824e-478a-8f31-23c3f2558e58,Namespace:kube-system,Attempt:0,}" Jul 12 00:13:49.038132 containerd[1485]: time="2025-07-12T00:13:49.038066295Z" level=error msg="Failed to destroy network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.038737 containerd[1485]: time="2025-07-12T00:13:49.038700672Z" level=error msg="encountered an error cleaning up failed sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.038967 containerd[1485]: time="2025-07-12T00:13:49.038910997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rnjg4,Uid:f208ae55-8f4f-4d14-a26a-564c62f2524f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.040192 kubelet[2601]: E0712 00:13:49.039301 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.040192 kubelet[2601]: E0712 00:13:49.039381 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rnjg4" Jul 12 00:13:49.040192 kubelet[2601]: E0712 00:13:49.039404 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rnjg4" Jul 12 00:13:49.040334 kubelet[2601]: E0712 00:13:49.039446 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rnjg4_kube-system(f208ae55-8f4f-4d14-a26a-564c62f2524f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rnjg4_kube-system(f208ae55-8f4f-4d14-a26a-564c62f2524f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rnjg4" podUID="f208ae55-8f4f-4d14-a26a-564c62f2524f" Jul 12 00:13:49.052365 containerd[1485]: time="2025-07-12T00:13:49.052306111Z" level=error msg="Failed to destroy network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.052926 containerd[1485]: time="2025-07-12T00:13:49.052839125Z" level=error msg="encountered an error cleaning up failed sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.053165 containerd[1485]: time="2025-07-12T00:13:49.052902807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-g2glf,Uid:263f7977-4a38-4140-adcf-d1a6d16328ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.053569 kubelet[2601]: E0712 00:13:49.053387 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.053569 kubelet[2601]: E0712 00:13:49.053449 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" Jul 12 00:13:49.053569 kubelet[2601]: E0712 00:13:49.053468 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" Jul 12 00:13:49.053683 kubelet[2601]: E0712 00:13:49.053523 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcd7b9b6d-g2glf_calico-apiserver(263f7977-4a38-4140-adcf-d1a6d16328ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bcd7b9b6d-g2glf_calico-apiserver(263f7977-4a38-4140-adcf-d1a6d16328ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" podUID="263f7977-4a38-4140-adcf-d1a6d16328ea" Jul 12 00:13:49.089100 containerd[1485]: time="2025-07-12T00:13:49.089030721Z" level=error msg="Failed to destroy network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.089455 containerd[1485]: time="2025-07-12T00:13:49.089420972Z" level=error msg="encountered an error cleaning up failed sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.089541 containerd[1485]: time="2025-07-12T00:13:49.089477693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-97z8q,Uid:e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.089735 kubelet[2601]: E0712 00:13:49.089695 2601 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.089816 kubelet[2601]: E0712 00:13:49.089757 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" Jul 12 00:13:49.089816 kubelet[2601]: E0712 00:13:49.089778 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" Jul 12 00:13:49.089942 kubelet[2601]: E0712 00:13:49.089850 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bcd7b9b6d-97z8q_calico-apiserver(e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bcd7b9b6d-97z8q_calico-apiserver(e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" podUID="e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5" Jul 12 00:13:49.101407 containerd[1485]: time="2025-07-12T00:13:49.100949036Z" level=error msg="Failed to destroy network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.103537 containerd[1485]: time="2025-07-12T00:13:49.103449062Z" level=error msg="encountered an error cleaning up failed sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.104055 containerd[1485]: time="2025-07-12T00:13:49.103779191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674cf9c754-qpdxf,Uid:4d576838-2540-49b2-98fa-baaffb730d5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.104439 kubelet[2601]: E0712 00:13:49.104395 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.104518 kubelet[2601]: E0712 00:13:49.104458 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674cf9c754-qpdxf" Jul 12 00:13:49.104518 kubelet[2601]: E0712 00:13:49.104478 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674cf9c754-qpdxf" Jul 12 00:13:49.104570 kubelet[2601]: E0712 00:13:49.104529 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-674cf9c754-qpdxf_calico-system(4d576838-2540-49b2-98fa-baaffb730d5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-674cf9c754-qpdxf_calico-system(4d576838-2540-49b2-98fa-baaffb730d5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674cf9c754-qpdxf" podUID="4d576838-2540-49b2-98fa-baaffb730d5f" Jul 12 00:13:49.146804 containerd[1485]: time="2025-07-12T00:13:49.146742326Z" level=error msg="Failed to destroy network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.147278 containerd[1485]: time="2025-07-12T00:13:49.147156257Z" level=error msg="encountered an error cleaning up failed sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.147278 containerd[1485]: time="2025-07-12T00:13:49.147227059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq9dw,Uid:fd55186a-824e-478a-8f31-23c3f2558e58,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.147519 kubelet[2601]: E0712 00:13:49.147445 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.147519 kubelet[2601]: E0712 00:13:49.147510 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pq9dw" Jul 12 00:13:49.147725 kubelet[2601]: E0712 00:13:49.147528 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pq9dw" Jul 12 00:13:49.147725 kubelet[2601]: E0712 00:13:49.147570 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pq9dw_kube-system(fd55186a-824e-478a-8f31-23c3f2558e58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pq9dw_kube-system(fd55186a-824e-478a-8f31-23c3f2558e58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pq9dw" podUID="fd55186a-824e-478a-8f31-23c3f2558e58" Jul 12 00:13:49.154227 containerd[1485]: time="2025-07-12T00:13:49.154067359Z" level=error msg="Failed to destroy network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.154733 containerd[1485]: time="2025-07-12T00:13:49.154658215Z" level=error msg="encountered an error cleaning up failed sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.154944 containerd[1485]: time="2025-07-12T00:13:49.154845220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-25c9q,Uid:8dcaf788-c931-4d5f-8dbb-aa867fceaa4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.155947 kubelet[2601]: E0712 00:13:49.155809 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.155947 kubelet[2601]: E0712 00:13:49.155890 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:49.155947 kubelet[2601]: E0712 00:13:49.155909 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-25c9q" Jul 12 00:13:49.157052 kubelet[2601]: E0712 00:13:49.156311 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-25c9q_calico-system(8dcaf788-c931-4d5f-8dbb-aa867fceaa4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-25c9q_calico-system(8dcaf788-c931-4d5f-8dbb-aa867fceaa4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-25c9q" podUID="8dcaf788-c931-4d5f-8dbb-aa867fceaa4c" Jul 12 00:13:49.158258 containerd[1485]: time="2025-07-12T00:13:49.158226869Z" level=error msg="Failed to destroy network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.158653 containerd[1485]: time="2025-07-12T00:13:49.158624560Z" level=error msg="encountered an error cleaning up failed sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.158920 containerd[1485]: time="2025-07-12T00:13:49.158746403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694bc47746-lklc8,Uid:a5d34381-7fee-470d-b68b-74c007d52fdd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.159377 kubelet[2601]: E0712 00:13:49.159334 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.159552 kubelet[2601]: E0712 00:13:49.159389 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-694bc47746-lklc8" Jul 12 00:13:49.159552 kubelet[2601]: E0712 00:13:49.159424 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-694bc47746-lklc8" Jul 12 00:13:49.159552 kubelet[2601]: E0712 00:13:49.159464 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-694bc47746-lklc8_calico-system(a5d34381-7fee-470d-b68b-74c007d52fdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-694bc47746-lklc8_calico-system(a5d34381-7fee-470d-b68b-74c007d52fdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-694bc47746-lklc8" podUID="a5d34381-7fee-470d-b68b-74c007d52fdd" Jul 12 00:13:49.582998 systemd[1]: Created slice kubepods-besteffort-podc9d45977_08ad_4a73_90ca_4efb866e9fdb.slice - libcontainer container kubepods-besteffort-podc9d45977_08ad_4a73_90ca_4efb866e9fdb.slice. 
Jul 12 00:13:49.586775 containerd[1485]: time="2025-07-12T00:13:49.586698868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twcjp,Uid:c9d45977-08ad-4a73-90ca-4efb866e9fdb,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:49.648693 containerd[1485]: time="2025-07-12T00:13:49.648628944Z" level=error msg="Failed to destroy network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.649498 containerd[1485]: time="2025-07-12T00:13:49.649238200Z" level=error msg="encountered an error cleaning up failed sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.649498 containerd[1485]: time="2025-07-12T00:13:49.649306722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twcjp,Uid:c9d45977-08ad-4a73-90ca-4efb866e9fdb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.651819 kubelet[2601]: E0712 00:13:49.649827 2601 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.651819 kubelet[2601]: E0712 00:13:49.649905 2601 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:49.651819 kubelet[2601]: E0712 00:13:49.649930 2601 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twcjp" Jul 12 00:13:49.652248 kubelet[2601]: E0712 00:13:49.649986 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-twcjp_calico-system(c9d45977-08ad-4a73-90ca-4efb866e9fdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-twcjp_calico-system(c9d45977-08ad-4a73-90ca-4efb866e9fdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:49.709426 kubelet[2601]: I0712 00:13:49.709032 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:49.713085 kubelet[2601]: I0712 00:13:49.712639 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:13:49.713237 containerd[1485]: time="2025-07-12T00:13:49.712708477Z" level=info msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\"" Jul 12 00:13:49.713237 containerd[1485]: time="2025-07-12T00:13:49.712919922Z" level=info msg="Ensure that sandbox 75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3 in task-service has been cleanup successfully" Jul 12 00:13:49.714436 containerd[1485]: time="2025-07-12T00:13:49.713567219Z" level=info msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" Jul 12 00:13:49.714436 containerd[1485]: time="2025-07-12T00:13:49.713705823Z" level=info msg="Ensure that sandbox aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63 in task-service has been cleanup successfully" Jul 12 00:13:49.716761 kubelet[2601]: I0712 00:13:49.716523 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:13:49.720236 containerd[1485]: time="2025-07-12T00:13:49.719728582Z" level=info msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" Jul 12 00:13:49.720236 containerd[1485]: time="2025-07-12T00:13:49.719974229Z" level=info msg="Ensure that sandbox 626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f in task-service has been cleanup successfully" Jul 12 00:13:49.723626 kubelet[2601]: I0712 00:13:49.723606 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:13:49.725725 containerd[1485]: time="2025-07-12T00:13:49.725687420Z" level=info msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\"" Jul 12 00:13:49.726215 containerd[1485]: time="2025-07-12T00:13:49.726189953Z" level=info msg="Ensure that sandbox b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207 in task-service has been cleanup successfully" Jul 12 00:13:49.729268 kubelet[2601]: I0712 00:13:49.729231 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:13:49.733022 containerd[1485]: time="2025-07-12T00:13:49.732985572Z" level=info msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" Jul 12 00:13:49.733550 containerd[1485]: time="2025-07-12T00:13:49.733290540Z" level=info msg="Ensure that sandbox 72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea in task-service has been cleanup successfully" Jul 12 00:13:49.739674 kubelet[2601]: I0712 00:13:49.739046 2601 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:13:49.742051 containerd[1485]: time="2025-07-12T00:13:49.741999970Z" level=info msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" Jul 12 00:13:49.744689 containerd[1485]: time="2025-07-12T00:13:49.744629560Z" level=info msg="Ensure that sandbox fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a in task-service has been cleanup successfully" Jul 12 00:13:49.751339 kubelet[2601]: I0712 00:13:49.751294 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:13:49.753750 containerd[1485]: time="2025-07-12T00:13:49.753451193Z" level=info msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" Jul 12 00:13:49.758181 containerd[1485]: time="2025-07-12T00:13:49.757059048Z" level=info msg="Ensure that sandbox 233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61 in task-service has been cleanup successfully" Jul 12 00:13:49.758449 kubelet[2601]: I0712 00:13:49.758327 2601 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:13:49.760259 containerd[1485]: time="2025-07-12T00:13:49.760219692Z" level=info msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\"" Jul 12 00:13:49.760399 containerd[1485]: time="2025-07-12T00:13:49.760375256Z" level=info msg="Ensure that sandbox 75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7 in task-service has been cleanup successfully" Jul 12 00:13:49.818416 containerd[1485]: time="2025-07-12T00:13:49.818365628Z" level=error msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" failed" error="failed to destroy network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.818836 kubelet[2601]: E0712 00:13:49.818769 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:49.819082 kubelet[2601]: E0712 00:13:49.818960 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"} Jul 12 00:13:49.819082 kubelet[2601]: E0712 00:13:49.819033 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d576838-2540-49b2-98fa-baaffb730d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.819229 kubelet[2601]: E0712 00:13:49.819062 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d576838-2540-49b2-98fa-baaffb730d5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674cf9c754-qpdxf" podUID="4d576838-2540-49b2-98fa-baaffb730d5f" Jul 12 00:13:49.827010 containerd[1485]: time="2025-07-12T00:13:49.826878773Z" level=error msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" failed" error="failed to destroy network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.828729 kubelet[2601]: E0712 00:13:49.827299 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:13:49.828729 kubelet[2601]: E0712 00:13:49.827348 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f"} Jul 12 00:13:49.828729 kubelet[2601]: E0712 00:13:49.827381 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.828729 kubelet[2601]: E0712 00:13:49.827403 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-25c9q" podUID="8dcaf788-c931-4d5f-8dbb-aa867fceaa4c" Jul 12 00:13:49.832749 containerd[1485]: time="2025-07-12T00:13:49.832706527Z" level=error msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" failed" error="failed to destroy network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.833280 kubelet[2601]: E0712 00:13:49.833106 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:13:49.833280 kubelet[2601]: E0712 00:13:49.833173 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea"} Jul 12 00:13:49.833280 kubelet[2601]: E0712 00:13:49.833206 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5d34381-7fee-470d-b68b-74c007d52fdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.833280 kubelet[2601]: E0712 00:13:49.833228 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5d34381-7fee-470d-b68b-74c007d52fdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-694bc47746-lklc8" podUID="a5d34381-7fee-470d-b68b-74c007d52fdd" Jul 12 00:13:49.857331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3-shm.mount: Deactivated successfully. Jul 12 00:13:49.857440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207-shm.mount: Deactivated successfully. Jul 12 00:13:49.857497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63-shm.mount: Deactivated successfully. 
Jul 12 00:13:49.862170 containerd[1485]: time="2025-07-12T00:13:49.861633971Z" level=error msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" failed" error="failed to destroy network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.867126 kubelet[2601]: E0712 00:13:49.867073 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:13:49.869610 kubelet[2601]: E0712 00:13:49.867854 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"} Jul 12 00:13:49.869610 kubelet[2601]: E0712 00:13:49.867900 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"263f7977-4a38-4140-adcf-d1a6d16328ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.869610 kubelet[2601]: E0712 00:13:49.867927 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"263f7977-4a38-4140-adcf-d1a6d16328ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" podUID="263f7977-4a38-4140-adcf-d1a6d16328ea" Jul 12 00:13:49.873351 containerd[1485]: time="2025-07-12T00:13:49.873242517Z" level=error msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" failed" error="failed to destroy network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.875860 kubelet[2601]: E0712 00:13:49.875817 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:13:49.876034 kubelet[2601]: E0712 00:13:49.876013 2601 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63"} Jul 12 00:13:49.876122 kubelet[2601]: E0712 00:13:49.876108 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f208ae55-8f4f-4d14-a26a-564c62f2524f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.876300 kubelet[2601]: E0712 00:13:49.876261 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f208ae55-8f4f-4d14-a26a-564c62f2524f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rnjg4" podUID="f208ae55-8f4f-4d14-a26a-564c62f2524f" Jul 12 00:13:49.879736 containerd[1485]: time="2025-07-12T00:13:49.879673567Z" level=error msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" failed" error="failed to destroy network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.880106 kubelet[2601]: E0712 00:13:49.879972 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:13:49.880106 kubelet[2601]: E0712 00:13:49.880025 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a"} Jul 12 00:13:49.880106 kubelet[2601]: E0712 00:13:49.880055 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.880106 kubelet[2601]: E0712 00:13:49.880076 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" podUID="e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5" Jul 12 00:13:49.881542 containerd[1485]: time="2025-07-12T00:13:49.881472415Z" level=error msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" failed" error="failed to destroy network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.881907 kubelet[2601]: E0712 00:13:49.881774 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:13:49.881907 kubelet[2601]: E0712 00:13:49.881834 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61"} Jul 12 00:13:49.881907 kubelet[2601]: E0712 00:13:49.881860 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.881907 kubelet[2601]: E0712 00:13:49.881879 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9d45977-08ad-4a73-90ca-4efb866e9fdb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twcjp" podUID="c9d45977-08ad-4a73-90ca-4efb866e9fdb" Jul 12 00:13:49.885635 containerd[1485]: time="2025-07-12T00:13:49.885538562Z" level=error msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" failed" error="failed to destroy network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 12 00:13:49.885754 kubelet[2601]: E0712 00:13:49.885721 2601 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:13:49.885858 kubelet[2601]: E0712 00:13:49.885761 2601 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"} Jul 12 00:13:49.885858 kubelet[2601]: E0712 00:13:49.885788 2601 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd55186a-824e-478a-8f31-23c3f2558e58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 12 00:13:49.885858 kubelet[2601]: E0712 00:13:49.885839 2601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd55186a-824e-478a-8f31-23c3f2558e58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pq9dw" podUID="fd55186a-824e-478a-8f31-23c3f2558e58" Jul 12 00:13:50.813749 kubelet[2601]: I0712 00:13:50.812914 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:53.430840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363197042.mount: Deactivated successfully. 
Jul 12 00:13:53.463462 containerd[1485]: time="2025-07-12T00:13:53.463391262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:53.464774 containerd[1485]: time="2025-07-12T00:13:53.464714975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 12 00:13:53.465684 containerd[1485]: time="2025-07-12T00:13:53.465615838Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:53.468193 containerd[1485]: time="2025-07-12T00:13:53.468108540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:53.469851 containerd[1485]: time="2025-07-12T00:13:53.468913921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.763819581s" Jul 12 00:13:53.469851 containerd[1485]: time="2025-07-12T00:13:53.468961922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 12 00:13:53.486917 containerd[1485]: time="2025-07-12T00:13:53.486867450Z" level=info msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 12 00:13:53.516542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416477080.mount: Deactivated successfully. Jul 12 00:13:53.520067 containerd[1485]: time="2025-07-12T00:13:53.520001960Z" level=info msg="CreateContainer within sandbox \"eb5c8adb631abfa5475e69797a8ed22b3ac0d11c2c30f9ac7194e235d106992a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66144bb272bd40a8781410ed024cfddaa4154d82d141e0401df57bf5913161c3\"" Jul 12 00:13:53.520975 containerd[1485]: time="2025-07-12T00:13:53.520937703Z" level=info msg="StartContainer for \"66144bb272bd40a8781410ed024cfddaa4154d82d141e0401df57bf5913161c3\"" Jul 12 00:13:53.571990 systemd[1]: Started cri-containerd-66144bb272bd40a8781410ed024cfddaa4154d82d141e0401df57bf5913161c3.scope - libcontainer container 66144bb272bd40a8781410ed024cfddaa4154d82d141e0401df57bf5913161c3. Jul 12 00:13:53.606247 containerd[1485]: time="2025-07-12T00:13:53.606143757Z" level=info msg="StartContainer for \"66144bb272bd40a8781410ed024cfddaa4154d82d141e0401df57bf5913161c3\" returns successfully" Jul 12 00:13:53.759456 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 12 00:13:53.759593 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 12 00:13:53.802357 kubelet[2601]: I0712 00:13:53.802079 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ptccc" podStartSLOduration=1.418088184 podStartE2EDuration="12.802057302s" podCreationTimestamp="2025-07-12 00:13:41 +0000 UTC" firstStartedPulling="2025-07-12 00:13:42.086164313 +0000 UTC m=+23.663157417" lastFinishedPulling="2025-07-12 00:13:53.470133431 +0000 UTC m=+35.047126535" observedRunningTime="2025-07-12 00:13:53.796018311 +0000 UTC m=+35.373011415" watchObservedRunningTime="2025-07-12 00:13:53.802057302 +0000 UTC m=+35.379050406" Jul 12 00:13:53.920970 containerd[1485]: time="2025-07-12T00:13:53.920913838Z" level=info msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\"" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.010 [INFO][3686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.011 [INFO][3686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" iface="eth0" netns="/var/run/netns/cni-3a19be43-a9b0-8d58-37d3-6a8dfc9070e1" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.011 [INFO][3686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" iface="eth0" netns="/var/run/netns/cni-3a19be43-a9b0-8d58-37d3-6a8dfc9070e1" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.011 [INFO][3686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" iface="eth0" netns="/var/run/netns/cni-3a19be43-a9b0-8d58-37d3-6a8dfc9070e1" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.011 [INFO][3686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.011 [INFO][3686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.068 [INFO][3693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.069 [INFO][3693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.069 [INFO][3693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.086 [WARNING][3693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.086 [INFO][3693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0" Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.090 [INFO][3693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:13:54.098329 containerd[1485]: 2025-07-12 00:13:54.093 [INFO][3686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Jul 12 00:13:54.100214 containerd[1485]: time="2025-07-12T00:13:54.098560417Z" level=info msg="TearDown network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" successfully" Jul 12 00:13:54.100214 containerd[1485]: time="2025-07-12T00:13:54.098593818Z" level=info msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" returns successfully" Jul 12 00:13:54.198892 kubelet[2601]: I0712 00:13:54.198836 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-backend-key-pair\") pod \"4d576838-2540-49b2-98fa-baaffb730d5f\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " Jul 12 00:13:54.198892 kubelet[2601]: I0712 00:13:54.198896 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-ca-bundle\") pod \"4d576838-2540-49b2-98fa-baaffb730d5f\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " Jul 12 00:13:54.199121 kubelet[2601]: I0712 00:13:54.198933 2601 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2llhj\" (UniqueName: \"kubernetes.io/projected/4d576838-2540-49b2-98fa-baaffb730d5f-kube-api-access-2llhj\") pod \"4d576838-2540-49b2-98fa-baaffb730d5f\" (UID: \"4d576838-2540-49b2-98fa-baaffb730d5f\") " Jul 12 00:13:54.202390 kubelet[2601]: I0712 00:13:54.202173 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4d576838-2540-49b2-98fa-baaffb730d5f" (UID: "4d576838-2540-49b2-98fa-baaffb730d5f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:13:54.206681 kubelet[2601]: I0712 00:13:54.206440 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d576838-2540-49b2-98fa-baaffb730d5f-kube-api-access-2llhj" (OuterVolumeSpecName: "kube-api-access-2llhj") pod "4d576838-2540-49b2-98fa-baaffb730d5f" (UID: "4d576838-2540-49b2-98fa-baaffb730d5f"). InnerVolumeSpecName "kube-api-access-2llhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:13:54.207191 kubelet[2601]: I0712 00:13:54.206638 2601 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4d576838-2540-49b2-98fa-baaffb730d5f" (UID: "4d576838-2540-49b2-98fa-baaffb730d5f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:13:54.300141 kubelet[2601]: I0712 00:13:54.300071 2601 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2llhj\" (UniqueName: \"kubernetes.io/projected/4d576838-2540-49b2-98fa-baaffb730d5f-kube-api-access-2llhj\") on node \"ci-4081-3-4-n-8926aa35a3\" DevicePath \"\"" Jul 12 00:13:54.300141 kubelet[2601]: I0712 00:13:54.300133 2601 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-backend-key-pair\") on node \"ci-4081-3-4-n-8926aa35a3\" DevicePath \"\"" Jul 12 00:13:54.300141 kubelet[2601]: I0712 00:13:54.300157 2601 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d576838-2540-49b2-98fa-baaffb730d5f-whisker-ca-bundle\") on node \"ci-4081-3-4-n-8926aa35a3\" DevicePath \"\"" Jul 12 00:13:54.432685 systemd[1]: run-netns-cni\x2d3a19be43\x2da9b0\x2d8d58\x2d37d3\x2d6a8dfc9070e1.mount: Deactivated successfully. Jul 12 00:13:54.433358 systemd[1]: var-lib-kubelet-pods-4d576838\x2d2540\x2d49b2\x2d98fa\x2dbaaffb730d5f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2llhj.mount: Deactivated successfully. Jul 12 00:13:54.433418 systemd[1]: var-lib-kubelet-pods-4d576838\x2d2540\x2d49b2\x2d98fa\x2dbaaffb730d5f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 12 00:13:54.596386 systemd[1]: Removed slice kubepods-besteffort-pod4d576838_2540_49b2_98fa_baaffb730d5f.slice - libcontainer container kubepods-besteffort-pod4d576838_2540_49b2_98fa_baaffb730d5f.slice. Jul 12 00:13:54.780276 kubelet[2601]: I0712 00:13:54.778372 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:13:54.861181 kubelet[2601]: W0712 00:13:54.861136 2601 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081-3-4-n-8926aa35a3" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-4-n-8926aa35a3' and this object Jul 12 00:13:54.863521 kubelet[2601]: E0712 00:13:54.863106 2601 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081-3-4-n-8926aa35a3\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-4-n-8926aa35a3' and this object" logger="UnhandledError" Jul 12 00:13:54.865067 systemd[1]: Created slice kubepods-besteffort-podd9f60756_caec_4fc1_b049_47a746ac0164.slice - libcontainer container kubepods-besteffort-podd9f60756_caec_4fc1_b049_47a746ac0164.slice. 
Jul 12 00:13:54.904322 kubelet[2601]: I0712 00:13:54.904215 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9f60756-caec-4fc1-b049-47a746ac0164-whisker-ca-bundle\") pod \"whisker-6468dbcb8b-68qt6\" (UID: \"d9f60756-caec-4fc1-b049-47a746ac0164\") " pod="calico-system/whisker-6468dbcb8b-68qt6" Jul 12 00:13:54.904322 kubelet[2601]: I0712 00:13:54.904315 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9f60756-caec-4fc1-b049-47a746ac0164-whisker-backend-key-pair\") pod \"whisker-6468dbcb8b-68qt6\" (UID: \"d9f60756-caec-4fc1-b049-47a746ac0164\") " pod="calico-system/whisker-6468dbcb8b-68qt6" Jul 12 00:13:54.904528 kubelet[2601]: I0712 00:13:54.904357 2601 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x47z\" (UniqueName: \"kubernetes.io/projected/d9f60756-caec-4fc1-b049-47a746ac0164-kube-api-access-7x47z\") pod \"whisker-6468dbcb8b-68qt6\" (UID: \"d9f60756-caec-4fc1-b049-47a746ac0164\") " pod="calico-system/whisker-6468dbcb8b-68qt6" Jul 12 00:13:55.737839 kernel: bpftool[3835]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 12 00:13:55.949198 systemd-networkd[1376]: vxlan.calico: Link UP Jul 12 00:13:55.949205 systemd-networkd[1376]: vxlan.calico: Gained carrier Jul 12 00:13:56.006654 kubelet[2601]: E0712 00:13:56.006533 2601 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 12 00:13:56.007806 kubelet[2601]: E0712 00:13:56.007618 2601 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9f60756-caec-4fc1-b049-47a746ac0164-whisker-backend-key-pair podName:d9f60756-caec-4fc1-b049-47a746ac0164 nodeName:}" failed. No retries permitted until 2025-07-12 00:13:56.507576499 +0000 UTC m=+38.084569603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/d9f60756-caec-4fc1-b049-47a746ac0164-whisker-backend-key-pair") pod "whisker-6468dbcb8b-68qt6" (UID: "d9f60756-caec-4fc1-b049-47a746ac0164") : failed to sync secret cache: timed out waiting for the condition Jul 12 00:13:56.582856 kubelet[2601]: I0712 00:13:56.582667 2601 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d576838-2540-49b2-98fa-baaffb730d5f" path="/var/lib/kubelet/pods/4d576838-2540-49b2-98fa-baaffb730d5f/volumes" Jul 12 00:13:56.671179 containerd[1485]: time="2025-07-12T00:13:56.671126447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6468dbcb8b-68qt6,Uid:d9f60756-caec-4fc1-b049-47a746ac0164,Namespace:calico-system,Attempt:0,}" Jul 12 00:13:56.847006 systemd-networkd[1376]: cali6b22d01ca32: Link UP Jul 12 00:13:56.850439 systemd-networkd[1376]: cali6b22d01ca32: Gained carrier Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.738 [INFO][3904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0 whisker-6468dbcb8b- calico-system d9f60756-caec-4fc1-b049-47a746ac0164 903 0 2025-07-12 00:13:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6468dbcb8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 whisker-6468dbcb8b-68qt6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6b22d01ca32 [] [] }} ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.739 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.769 [INFO][3915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" HandleID="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.769 [INFO][3915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" HandleID="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"whisker-6468dbcb8b-68qt6", "timestamp":"2025-07-12 00:13:56.769520189 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 
00:13:56.769 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.769 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.769 [INFO][3915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.785 [INFO][3915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.795 [INFO][3915] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.803 [INFO][3915] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.806 [INFO][3915] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.810 [INFO][3915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.810 [INFO][3915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.812 [INFO][3915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.818 [INFO][3915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.829 [INFO][3915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.65/26] block=192.168.98.64/26 handle="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.829 [INFO][3915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.65/26] handle="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.829 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
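
The "No retries permitted until … (durationBeforeRetry 500ms)" entry a few lines up is kubelet's per-operation exponential backoff for volume mounts: the wait starts at 500ms and doubles on each consecutive failure, up to a cap (roughly two minutes in kubelet sources; the exact cap used here is an assumption). A sketch of the schedule:

    // Sketch of kubelet's nestedpendingoperations-style backoff: 500ms
    // initial delay, doubling per consecutive failure, capped. The 2m2s cap
    // mirrors kubelet's volume manager but is an assumption for this log.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	maxWait := 2*time.Minute + 2*time.Second
    	wait := 500 * time.Millisecond
    	for i := 1; i <= 5; i++ {
    		fmt.Printf("failure %d: retry after %v\n", i, wait)
    		wait *= 2
    		if wait > maxWait {
    			wait = maxWait
    		}
    	}
    }

Here only the first 500ms retry was needed: the secret cache synced and the volume mounted on the next attempt.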
Jul 12 00:13:56.874054 containerd[1485]: 2025-07-12 00:13:56.829 [INFO][3915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.65/26] IPv6=[] ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" HandleID="k8s-pod-network.f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.833 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0", GenerateName:"whisker-6468dbcb8b-", Namespace:"calico-system", SelfLink:"", UID:"d9f60756-caec-4fc1-b049-47a746ac0164", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6468dbcb8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"whisker-6468dbcb8b-68qt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6b22d01ca32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.833 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.65/32] ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.834 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b22d01ca32 ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.851 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.852 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" 
Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0", GenerateName:"whisker-6468dbcb8b-", Namespace:"calico-system", SelfLink:"", UID:"d9f60756-caec-4fc1-b049-47a746ac0164", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6468dbcb8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a", Pod:"whisker-6468dbcb8b-68qt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6b22d01ca32", MAC:"ea:39:6f:c0:25:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:13:56.875200 containerd[1485]: 2025-07-12 00:13:56.868 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a" Namespace="calico-system" Pod="whisker-6468dbcb8b-68qt6" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--6468dbcb8b--68qt6-eth0" Jul 12 00:13:56.901527 containerd[1485]: time="2025-07-12T00:13:56.901005413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:13:56.901527 containerd[1485]: time="2025-07-12T00:13:56.901062054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:13:56.901527 containerd[1485]: time="2025-07-12T00:13:56.901073855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:56.901527 containerd[1485]: time="2025-07-12T00:13:56.901160377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:13:56.942040 systemd[1]: Started cri-containerd-f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a.scope - libcontainer container f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a. 
Jul 12 00:13:56.988212 containerd[1485]: time="2025-07-12T00:13:56.988146483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6468dbcb8b-68qt6,Uid:d9f60756-caec-4fc1-b049-47a746ac0164,Namespace:calico-system,Attempt:0,} returns sandbox id \"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a\"" Jul 12 00:13:56.997564 containerd[1485]: time="2025-07-12T00:13:56.997462229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 12 00:13:57.254845 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jul 12 00:13:58.306831 containerd[1485]: time="2025-07-12T00:13:58.305304794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:58.306831 containerd[1485]: time="2025-07-12T00:13:58.306681787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 12 00:13:58.307705 containerd[1485]: time="2025-07-12T00:13:58.307657490Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:58.310986 containerd[1485]: time="2025-07-12T00:13:58.310940288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:13:58.312168 containerd[1485]: time="2025-07-12T00:13:58.312117516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.314573446s" Jul 12 00:13:58.312168 containerd[1485]: time="2025-07-12T00:13:58.312163037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 12 00:13:58.320646 containerd[1485]: time="2025-07-12T00:13:58.320577997Z" level=info msg="CreateContainer within sandbox \"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 12 00:13:58.345092 containerd[1485]: time="2025-07-12T00:13:58.344976217Z" level=info msg="CreateContainer within sandbox \"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f0b62ad3d42fc27b8ab369eaab315e67b0a31f4b9cb55dcc908e07a0152627c9\"" Jul 12 00:13:58.346926 containerd[1485]: time="2025-07-12T00:13:58.346855061Z" level=info msg="StartContainer for \"f0b62ad3d42fc27b8ab369eaab315e67b0a31f4b9cb55dcc908e07a0152627c9\"" Jul 12 00:13:58.391071 systemd[1]: Started cri-containerd-f0b62ad3d42fc27b8ab369eaab315e67b0a31f4b9cb55dcc908e07a0152627c9.scope - libcontainer container f0b62ad3d42fc27b8ab369eaab315e67b0a31f4b9cb55dcc908e07a0152627c9. 
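
The kubelet entries pair each wall-clock stamp with a Go monotonic offset ("m=+…") counted from process start, so any single pair dates the kubelet process itself. Using the retry deadline logged earlier:

    // Subtracting a logged monotonic offset from its wall-clock stamp
    // recovers approximately when kubelet[2601] started.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	wall, _ := time.Parse(time.RFC3339Nano, "2025-07-12T00:13:56.507576499Z")
    	mono := 38*time.Second + 84569603*time.Nanosecond // m=+38.084569603
    	fmt.Println("kubelet[2601] started ≈", wall.Add(-mono))
    	// ≈ 2025-07-12 00:13:18.423006896 +0000 UTC
    }

Every (wall, m=+) pair in this log yields the same start time to within microseconds, a useful sanity check when correlating entries.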
Jul 12 00:13:58.438682 containerd[1485]: time="2025-07-12T00:13:58.436632473Z" level=info msg="StartContainer for \"f0b62ad3d42fc27b8ab369eaab315e67b0a31f4b9cb55dcc908e07a0152627c9\" returns successfully" Jul 12 00:13:58.439957 containerd[1485]: time="2025-07-12T00:13:58.439916231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 12 00:13:58.534128 systemd-networkd[1376]: cali6b22d01ca32: Gained IPv6LL Jul 12 00:14:00.197620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925695427.mount: Deactivated successfully. Jul 12 00:14:00.226982 containerd[1485]: time="2025-07-12T00:14:00.226886603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:00.228680 containerd[1485]: time="2025-07-12T00:14:00.228584522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 12 00:14:00.229543 containerd[1485]: time="2025-07-12T00:14:00.229464783Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:00.232730 containerd[1485]: time="2025-07-12T00:14:00.232657657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:00.233934 containerd[1485]: time="2025-07-12T00:14:00.233133549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.793173316s" Jul 12 00:14:00.233934 containerd[1485]: time="2025-07-12T00:14:00.233172030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 12 00:14:00.238511 containerd[1485]: time="2025-07-12T00:14:00.237499050Z" level=info msg="CreateContainer within sandbox \"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 12 00:14:00.264781 containerd[1485]: time="2025-07-12T00:14:00.264120512Z" level=info msg="CreateContainer within sandbox \"f08e2a9a630fe7c5286a218078ec0a7b38ad055a4bbb4510936686bdc13d974a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a5c73fd804a6f673d7ba46c8866e0dabd059481604cd021734f359345e1aa714\"" Jul 12 00:14:00.265291 containerd[1485]: time="2025-07-12T00:14:00.265205577Z" level=info msg="StartContainer for \"a5c73fd804a6f673d7ba46c8866e0dabd059481604cd021734f359345e1aa714\"" Jul 12 00:14:00.308104 systemd[1]: Started cri-containerd-a5c73fd804a6f673d7ba46c8866e0dabd059481604cd021734f359345e1aa714.scope - libcontainer container a5c73fd804a6f673d7ba46c8866e0dabd059481604cd021734f359345e1aa714. 
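
"Gained IPv6LL" above means systemd-networkd observed an IPv6 link-local address on the interface. Under the kernel's EUI-64 address generation (assumed here for the cali* veth), that address is derivable from the interface MAC: flip the universal/local bit of the first octet and splice ff:fe into the middle. Using cali6b22d01ca32's MAC from the endpoint dump earlier:

    // Derive the EUI-64 IPv6 link-local address from a MAC, assuming the
    // default eui64 addrgenmode for the host-side veth.
    package main

    import (
    	"fmt"
    	"net"
    )

    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
    	ip := make(net.IP, net.IPv6len)
    	ip[0], ip[1] = 0xfe, 0x80
    	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
    	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
    	ip[12], ip[13] = 0xfe, mac[3]
    	ip[14], ip[15] = mac[4], mac[5]
    	return ip
    }

    func main() {
    	mac, _ := net.ParseMAC("ea:39:6f:c0:25:6b")
    	fmt.Println(linkLocalFromMAC(mac)) // fe80::e839:6fff:fec0:256b
    }
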
Jul 12 00:14:00.351921 containerd[1485]: time="2025-07-12T00:14:00.351785078Z" level=info msg="StartContainer for \"a5c73fd804a6f673d7ba46c8866e0dabd059481604cd021734f359345e1aa714\" returns successfully" Jul 12 00:14:00.576478 containerd[1485]: time="2025-07-12T00:14:00.575127730Z" level=info msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.649 [INFO][4073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.649 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" iface="eth0" netns="/var/run/netns/cni-a1d5cc25-42a0-8d36-f2ba-19db90d63ccf" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.650 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" iface="eth0" netns="/var/run/netns/cni-a1d5cc25-42a0-8d36-f2ba-19db90d63ccf" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.650 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" iface="eth0" netns="/var/run/netns/cni-a1d5cc25-42a0-8d36-f2ba-19db90d63ccf" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.650 [INFO][4073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.650 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.675 [INFO][4081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.675 [INFO][4081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.675 [INFO][4081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.692 [WARNING][4081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.692 [INFO][4081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.694 [INFO][4081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:00.698569 containerd[1485]: 2025-07-12 00:14:00.696 [INFO][4073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:00.699280 containerd[1485]: time="2025-07-12T00:14:00.698740535Z" level=info msg="TearDown network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" successfully" Jul 12 00:14:00.699280 containerd[1485]: time="2025-07-12T00:14:00.698818536Z" level=info msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" returns successfully" Jul 12 00:14:00.699610 containerd[1485]: time="2025-07-12T00:14:00.699573994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-97z8q,Uid:e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:14:00.790368 systemd[1]: run-netns-cni\x2da1d5cc25\x2d42a0\x2d8d36\x2df2ba\x2d19db90d63ccf.mount: Deactivated successfully. Jul 12 00:14:00.833205 kubelet[2601]: I0712 00:14:00.833017 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6468dbcb8b-68qt6" podStartSLOduration=3.592281307 podStartE2EDuration="6.832993428s" podCreationTimestamp="2025-07-12 00:13:54 +0000 UTC" firstStartedPulling="2025-07-12 00:13:56.993481252 +0000 UTC m=+38.570474396" lastFinishedPulling="2025-07-12 00:14:00.234193453 +0000 UTC m=+41.811186517" observedRunningTime="2025-07-12 00:14:00.832139808 +0000 UTC m=+42.409132912" watchObservedRunningTime="2025-07-12 00:14:00.832993428 +0000 UTC m=+42.409986532" Jul 12 00:14:00.884295 systemd-networkd[1376]: calief19aedaaa3: Link UP Jul 12 00:14:00.885990 systemd-networkd[1376]: calief19aedaaa3: Gained carrier Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.763 [INFO][4089] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0 calico-apiserver-5bcd7b9b6d- calico-apiserver e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5 930 0 2025-07-12 00:13:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcd7b9b6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 calico-apiserver-5bcd7b9b6d-97z8q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief19aedaaa3 [] [] }} ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.764 [INFO][4089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.800 [INFO][4100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" HandleID="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" 
Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.800 [INFO][4100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" HandleID="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"calico-apiserver-5bcd7b9b6d-97z8q", "timestamp":"2025-07-12 00:14:00.800286824 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.800 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.800 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.800 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.819 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.827 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.844 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.853 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.857 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.858 [INFO][4100] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.860 [INFO][4100] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139 Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.866 [INFO][4100] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.876 [INFO][4100] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.66/26] block=192.168.98.64/26 handle="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.876 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.98.66/26] handle="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.876 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:00.907574 containerd[1485]: 2025-07-12 00:14:00.877 [INFO][4100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.66/26] IPv6=[] ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" HandleID="k8s-pod-network.2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.879 [INFO][4089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"calico-apiserver-5bcd7b9b6d-97z8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief19aedaaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.879 [INFO][4089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.66/32] ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.879 [INFO][4089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief19aedaaa3 ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.885 [INFO][4089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.887 [INFO][4089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139", Pod:"calico-apiserver-5bcd7b9b6d-97z8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief19aedaaa3", MAC:"52:5a:a0:2d:ce:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:00.914139 containerd[1485]: 2025-07-12 00:14:00.900 [INFO][4089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-97z8q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:00.945417 containerd[1485]: time="2025-07-12T00:14:00.945269648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:00.945417 containerd[1485]: time="2025-07-12T00:14:00.945358090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:00.945417 containerd[1485]: time="2025-07-12T00:14:00.945371330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:00.945745 containerd[1485]: time="2025-07-12T00:14:00.945688818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:00.976048 systemd[1]: Started cri-containerd-2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139.scope - libcontainer container 2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139. Jul 12 00:14:01.025149 containerd[1485]: time="2025-07-12T00:14:01.025093426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-97z8q,Uid:e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139\"" Jul 12 00:14:01.028220 containerd[1485]: time="2025-07-12T00:14:01.028010254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:14:01.575147 containerd[1485]: time="2025-07-12T00:14:01.575001533Z" level=info msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\"" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.652 [INFO][4174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.652 [INFO][4174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" iface="eth0" netns="/var/run/netns/cni-d35d18b1-15d2-3133-c433-be5c6dce2e17" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.653 [INFO][4174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" iface="eth0" netns="/var/run/netns/cni-d35d18b1-15d2-3133-c433-be5c6dce2e17" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.654 [INFO][4174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" iface="eth0" netns="/var/run/netns/cni-d35d18b1-15d2-3133-c433-be5c6dce2e17" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.654 [INFO][4174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.654 [INFO][4174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.680 [INFO][4182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.680 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.680 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.697 [WARNING][4182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.697 [INFO][4182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.699 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:01.703200 containerd[1485]: 2025-07-12 00:14:01.701 [INFO][4174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Jul 12 00:14:01.706013 containerd[1485]: time="2025-07-12T00:14:01.705928447Z" level=info msg="TearDown network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" successfully" Jul 12 00:14:01.706013 containerd[1485]: time="2025-07-12T00:14:01.705978208Z" level=info msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" returns successfully" Jul 12 00:14:01.706965 containerd[1485]: time="2025-07-12T00:14:01.706764783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-g2glf,Uid:263f7977-4a38-4140-adcf-d1a6d16328ea,Namespace:calico-apiserver,Attempt:1,}" Jul 12 00:14:01.707040 systemd[1]: run-netns-cni\x2dd35d18b1\x2d15d2\x2d3133\x2dc433\x2dbe5c6dce2e17.mount: Deactivated successfully. 
Jul 12 00:14:01.894576 systemd-networkd[1376]: cali178da9e3224: Link UP Jul 12 00:14:01.896582 systemd-networkd[1376]: cali178da9e3224: Gained carrier Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.775 [INFO][4189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0 calico-apiserver-5bcd7b9b6d- calico-apiserver 263f7977-4a38-4140-adcf-d1a6d16328ea 946 0 2025-07-12 00:13:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bcd7b9b6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 calico-apiserver-5bcd7b9b6d-g2glf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali178da9e3224 [] [] }} ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.775 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.815 [INFO][4201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" HandleID="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.815 [INFO][4201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" HandleID="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"calico-apiserver-5bcd7b9b6d-g2glf", "timestamp":"2025-07-12 00:14:01.814759083 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.815 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.816 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.816 [INFO][4201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.835 [INFO][4201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.844 [INFO][4201] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.851 [INFO][4201] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.856 [INFO][4201] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.860 [INFO][4201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.860 [INFO][4201] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.865 [INFO][4201] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828 Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.874 [INFO][4201] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.889 [INFO][4201] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.67/26] block=192.168.98.64/26 handle="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.889 [INFO][4201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.67/26] handle="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.889 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:01.916653 containerd[1485]: 2025-07-12 00:14:01.889 [INFO][4201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.67/26] IPv6=[] ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" HandleID="k8s-pod-network.eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.891 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"263f7977-4a38-4140-adcf-d1a6d16328ea", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"calico-apiserver-5bcd7b9b6d-g2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali178da9e3224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.892 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.67/32] ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.892 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali178da9e3224 ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.894 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.895 
[INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"263f7977-4a38-4140-adcf-d1a6d16328ea", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828", Pod:"calico-apiserver-5bcd7b9b6d-g2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali178da9e3224", MAC:"5a:9b:d1:a8:dd:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:01.918313 containerd[1485]: 2025-07-12 00:14:01.913 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828" Namespace="calico-apiserver" Pod="calico-apiserver-5bcd7b9b6d-g2glf" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0" Jul 12 00:14:01.943458 containerd[1485]: time="2025-07-12T00:14:01.942718019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:01.943458 containerd[1485]: time="2025-07-12T00:14:01.943359391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:01.943458 containerd[1485]: time="2025-07-12T00:14:01.943380152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:01.943709 containerd[1485]: time="2025-07-12T00:14:01.943594316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:01.973085 systemd[1]: Started cri-containerd-eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828.scope - libcontainer container eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828. 
Jul 12 00:14:02.017980 containerd[1485]: time="2025-07-12T00:14:02.017838404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bcd7b9b6d-g2glf,Uid:263f7977-4a38-4140-adcf-d1a6d16328ea,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828\"" Jul 12 00:14:02.886148 systemd-networkd[1376]: calief19aedaaa3: Gained IPv6LL Jul 12 00:14:02.960859 containerd[1485]: time="2025-07-12T00:14:02.960741203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:02.962426 containerd[1485]: time="2025-07-12T00:14:02.962351188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 12 00:14:02.964036 containerd[1485]: time="2025-07-12T00:14:02.963917052Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:02.966991 containerd[1485]: time="2025-07-12T00:14:02.966888664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:02.968423 containerd[1485]: time="2025-07-12T00:14:02.967899454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.939806919s" Jul 12 00:14:02.968423 containerd[1485]: time="2025-07-12T00:14:02.967940933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:14:02.972832 containerd[1485]: time="2025-07-12T00:14:02.970169752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 12 00:14:02.973550 containerd[1485]: time="2025-07-12T00:14:02.973266002Z" level=info msg="CreateContainer within sandbox \"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:14:02.989875 containerd[1485]: time="2025-07-12T00:14:02.989832922Z" level=info msg="CreateContainer within sandbox \"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff085f1557dfc94c03e3ad5e9af048a9e7cc47bb48c7af1ba2401be82d3d19ad\"" Jul 12 00:14:02.991867 containerd[1485]: time="2025-07-12T00:14:02.990993190Z" level=info msg="StartContainer for \"ff085f1557dfc94c03e3ad5e9af048a9e7cc47bb48c7af1ba2401be82d3d19ad\"" Jul 12 00:14:03.038334 systemd[1]: Started cri-containerd-ff085f1557dfc94c03e3ad5e9af048a9e7cc47bb48c7af1ba2401be82d3d19ad.scope - libcontainer container ff085f1557dfc94c03e3ad5e9af048a9e7cc47bb48c7af1ba2401be82d3d19ad. 
Jul 12 00:14:03.081831 containerd[1485]: time="2025-07-12T00:14:03.081574902Z" level=info msg="StartContainer for \"ff085f1557dfc94c03e3ad5e9af048a9e7cc47bb48c7af1ba2401be82d3d19ad\" returns successfully" Jul 12 00:14:03.270187 systemd-networkd[1376]: cali178da9e3224: Gained IPv6LL Jul 12 00:14:03.322576 containerd[1485]: time="2025-07-12T00:14:03.321590380Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:03.324489 containerd[1485]: time="2025-07-12T00:14:03.324441675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 12 00:14:03.326898 containerd[1485]: time="2025-07-12T00:14:03.326855413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 356.637701ms" Jul 12 00:14:03.327044 containerd[1485]: time="2025-07-12T00:14:03.327027412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 12 00:14:03.331483 containerd[1485]: time="2025-07-12T00:14:03.331421533Z" level=info msg="CreateContainer within sandbox \"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 12 00:14:03.353160 containerd[1485]: time="2025-07-12T00:14:03.353027302Z" level=info msg="CreateContainer within sandbox \"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a402080c30eeff0016cb4025da7a79df8a8dc7845f7aa6c3672eb36e6e9a2aa8\"" Jul 12 00:14:03.361211 containerd[1485]: time="2025-07-12T00:14:03.356593351Z" level=info msg="StartContainer for \"a402080c30eeff0016cb4025da7a79df8a8dc7845f7aa6c3672eb36e6e9a2aa8\"" Jul 12 00:14:03.358640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144183525.mount: Deactivated successfully. Jul 12 00:14:03.397045 systemd[1]: Started cri-containerd-a402080c30eeff0016cb4025da7a79df8a8dc7845f7aa6c3672eb36e6e9a2aa8.scope - libcontainer container a402080c30eeff0016cb4025da7a79df8a8dc7845f7aa6c3672eb36e6e9a2aa8. Jul 12 00:14:03.458107 containerd[1485]: time="2025-07-12T00:14:03.458011854Z" level=info msg="StartContainer for \"a402080c30eeff0016cb4025da7a79df8a8dc7845f7aa6c3672eb36e6e9a2aa8\" returns successfully" Jul 12 00:14:03.486655 kubelet[2601]: I0712 00:14:03.486601 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:03.575486 containerd[1485]: time="2025-07-12T00:14:03.575438336Z" level=info msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.639 [INFO][4375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.644 [INFO][4375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" iface="eth0" netns="/var/run/netns/cni-08bf8957-0433-6f5d-ffa8-f4722e1800f1" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.645 [INFO][4375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" iface="eth0" netns="/var/run/netns/cni-08bf8957-0433-6f5d-ffa8-f4722e1800f1" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.645 [INFO][4375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" iface="eth0" netns="/var/run/netns/cni-08bf8957-0433-6f5d-ffa8-f4722e1800f1" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.645 [INFO][4375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.645 [INFO][4375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.677 [INFO][4383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.677 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.677 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.688 [WARNING][4383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.688 [INFO][4383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.691 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:03.697087 containerd[1485]: 2025-07-12 00:14:03.693 [INFO][4375] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:03.697628 containerd[1485]: time="2025-07-12T00:14:03.697424338Z" level=info msg="TearDown network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" successfully" Jul 12 00:14:03.697628 containerd[1485]: time="2025-07-12T00:14:03.697457057Z" level=info msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" returns successfully" Jul 12 00:14:03.698306 containerd[1485]: time="2025-07-12T00:14:03.698263970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-25c9q,Uid:8dcaf788-c931-4d5f-8dbb-aa867fceaa4c,Namespace:calico-system,Attempt:1,}" Jul 12 00:14:03.875207 kubelet[2601]: I0712 00:14:03.874818 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-g2glf" podStartSLOduration=27.566078157 podStartE2EDuration="28.87478769s" podCreationTimestamp="2025-07-12 00:13:35 +0000 UTC" firstStartedPulling="2025-07-12 00:14:02.019724226 +0000 UTC m=+43.596717330" lastFinishedPulling="2025-07-12 00:14:03.328433759 +0000 UTC m=+44.905426863" observedRunningTime="2025-07-12 00:14:03.855390741 +0000 UTC m=+45.432383845" watchObservedRunningTime="2025-07-12 00:14:03.87478769 +0000 UTC m=+45.451780834" Jul 12 00:14:03.953838 systemd-networkd[1376]: cali5cbb56a0e33: Link UP Jul 12 00:14:03.959395 systemd-networkd[1376]: cali5cbb56a0e33: Gained carrier Jul 12 00:14:03.971667 kubelet[2601]: I0712 00:14:03.971592 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bcd7b9b6d-97z8q" podStartSLOduration=27.029787916 podStartE2EDuration="28.971573194s" podCreationTimestamp="2025-07-12 00:13:35 +0000 UTC" firstStartedPulling="2025-07-12 00:14:01.027516642 +0000 UTC m=+42.604509746" lastFinishedPulling="2025-07-12 00:14:02.96930192 +0000 UTC m=+44.546295024" observedRunningTime="2025-07-12 00:14:03.875330045 +0000 UTC m=+45.452323149" watchObservedRunningTime="2025-07-12 00:14:03.971573194 +0000 UTC m=+45.548566298" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.829 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0 goldmane-58fd7646b9- calico-system 8dcaf788-c931-4d5f-8dbb-aa867fceaa4c 961 0 2025-07-12 00:13:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 goldmane-58fd7646b9-25c9q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5cbb56a0e33 [] [] }} ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.829 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.887 [INFO][4419] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" HandleID="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.887 [INFO][4419] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" HandleID="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"goldmane-58fd7646b9-25c9q", "timestamp":"2025-07-12 00:14:03.88613643 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.887 [INFO][4419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.887 [INFO][4419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.887 [INFO][4419] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.902 [INFO][4419] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.908 [INFO][4419] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.915 [INFO][4419] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.917 [INFO][4419] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.922 [INFO][4419] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.922 [INFO][4419] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.924 [INFO][4419] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.930 [INFO][4419] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.936 [INFO][4419] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.68/26] block=192.168.98.64/26 
handle="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.936 [INFO][4419] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.68/26] handle="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.936 [INFO][4419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:03.977043 containerd[1485]: 2025-07-12 00:14:03.936 [INFO][4419] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.68/26] IPv6=[] ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" HandleID="k8s-pod-network.438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.939 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"goldmane-58fd7646b9-25c9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cbb56a0e33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.939 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.68/32] ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.939 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cbb56a0e33 ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.953 [INFO][4390] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.954 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca", Pod:"goldmane-58fd7646b9-25c9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cbb56a0e33", MAC:"42:ac:76:f8:42:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:03.979125 containerd[1485]: 2025-07-12 00:14:03.973 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca" Namespace="calico-system" Pod="goldmane-58fd7646b9-25c9q" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:03.992202 systemd[1]: run-netns-cni\x2d08bf8957\x2d0433\x2d6f5d\x2dffa8\x2df4722e1800f1.mount: Deactivated successfully. Jul 12 00:14:04.008568 containerd[1485]: time="2025-07-12T00:14:04.008253356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:04.008568 containerd[1485]: time="2025-07-12T00:14:04.008322476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:04.008568 containerd[1485]: time="2025-07-12T00:14:04.008347555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:04.008568 containerd[1485]: time="2025-07-12T00:14:04.008458395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:04.051365 systemd[1]: run-containerd-runc-k8s.io-438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca-runc.9pJxan.mount: Deactivated successfully. Jul 12 00:14:04.064735 systemd[1]: Started cri-containerd-438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca.scope - libcontainer container 438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca. Jul 12 00:14:04.147970 containerd[1485]: time="2025-07-12T00:14:04.147463118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-25c9q,Uid:8dcaf788-c931-4d5f-8dbb-aa867fceaa4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca\"" Jul 12 00:14:04.151244 containerd[1485]: time="2025-07-12T00:14:04.151020650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 12 00:14:04.576677 containerd[1485]: time="2025-07-12T00:14:04.576029556Z" level=info msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" Jul 12 00:14:04.578471 containerd[1485]: time="2025-07-12T00:14:04.577390185Z" level=info msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" Jul 12 00:14:04.580343 containerd[1485]: time="2025-07-12T00:14:04.580299682Z" level=info msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\"" Jul 12 00:14:04.583598 containerd[1485]: time="2025-07-12T00:14:04.583411737Z" level=info msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.713 [INFO][4529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.713 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" iface="eth0" netns="/var/run/netns/cni-f8101c0b-30a4-2556-511a-09bde0f5ae86" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.714 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" iface="eth0" netns="/var/run/netns/cni-f8101c0b-30a4-2556-511a-09bde0f5ae86" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.714 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" iface="eth0" netns="/var/run/netns/cni-f8101c0b-30a4-2556-511a-09bde0f5ae86" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.714 [INFO][4529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.714 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.743 [INFO][4546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.743 [INFO][4546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.743 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.765 [WARNING][4546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.765 [INFO][4546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.771 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:04.778824 containerd[1485]: 2025-07-12 00:14:04.773 [INFO][4529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:04.780481 containerd[1485]: time="2025-07-12T00:14:04.779872119Z" level=info msg="TearDown network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" successfully" Jul 12 00:14:04.780481 containerd[1485]: time="2025-07-12T00:14:04.779903439Z" level=info msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" returns successfully" Jul 12 00:14:04.783976 containerd[1485]: time="2025-07-12T00:14:04.782163141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rnjg4,Uid:f208ae55-8f4f-4d14-a26a-564c62f2524f,Namespace:kube-system,Attempt:1,}" Jul 12 00:14:04.784535 systemd[1]: run-netns-cni\x2df8101c0b\x2d30a4\x2d2556\x2d511a\x2d09bde0f5ae86.mount: Deactivated successfully. 
Jul 12 00:14:04.858490 kubelet[2601]: I0712 00:14:04.858375 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:04.861000 kubelet[2601]: I0712 00:14:04.858818 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.768 [INFO][4527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.777 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" iface="eth0" netns="/var/run/netns/cni-1201d03f-bdda-c141-f533-ecaf44d2eec1" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.778 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" iface="eth0" netns="/var/run/netns/cni-1201d03f-bdda-c141-f533-ecaf44d2eec1" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.778 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" iface="eth0" netns="/var/run/netns/cni-1201d03f-bdda-c141-f533-ecaf44d2eec1" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.778 [INFO][4527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.778 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.871 [INFO][4556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.872 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.873 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.889 [WARNING][4556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.889 [INFO][4556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.896 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:04.913594 containerd[1485]: 2025-07-12 00:14:04.908 [INFO][4527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:04.915858 containerd[1485]: time="2025-07-12T00:14:04.913921443Z" level=info msg="TearDown network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" successfully" Jul 12 00:14:04.915858 containerd[1485]: time="2025-07-12T00:14:04.913994482Z" level=info msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" returns successfully" Jul 12 00:14:04.915905 containerd[1485]: time="2025-07-12T00:14:04.915865227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694bc47746-lklc8,Uid:a5d34381-7fee-470d-b68b-74c007d52fdd,Namespace:calico-system,Attempt:1,}" Jul 12 00:14:04.990113 systemd[1]: run-netns-cni\x2d1201d03f\x2dbdda\x2dc141\x2df533\x2decaf44d2eec1.mount: Deactivated successfully. Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.812 [INFO][4528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.812 [INFO][4528] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" iface="eth0" netns="/var/run/netns/cni-613f70fb-4956-9296-840f-7e723571675b" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.813 [INFO][4528] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" iface="eth0" netns="/var/run/netns/cni-613f70fb-4956-9296-840f-7e723571675b" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.813 [INFO][4528] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" iface="eth0" netns="/var/run/netns/cni-613f70fb-4956-9296-840f-7e723571675b" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.813 [INFO][4528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.813 [INFO][4528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.940 [INFO][4561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.942 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.942 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.962 [WARNING][4561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.962 [INFO][4561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.964 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:04.992573 containerd[1485]: 2025-07-12 00:14:04.970 [INFO][4528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:04.992573 containerd[1485]: time="2025-07-12T00:14:04.992144534Z" level=info msg="TearDown network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" successfully" Jul 12 00:14:04.992573 containerd[1485]: time="2025-07-12T00:14:04.992172054Z" level=info msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" returns successfully" Jul 12 00:14:05.001171 containerd[1485]: time="2025-07-12T00:14:04.995418508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twcjp,Uid:c9d45977-08ad-4a73-90ca-4efb866e9fdb,Namespace:calico-system,Attempt:1,}" Jul 12 00:14:04.996461 systemd[1]: run-netns-cni\x2d613f70fb\x2d4956\x2d9296\x2d840f\x2d7e723571675b.mount: Deactivated successfully. Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.818 [INFO][4522] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.818 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" iface="eth0" netns="/var/run/netns/cni-c78c7ede-412a-b6f9-0d5b-e24b9bad3c95" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.819 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" iface="eth0" netns="/var/run/netns/cni-c78c7ede-412a-b6f9-0d5b-e24b9bad3c95" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.821 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" iface="eth0" netns="/var/run/netns/cni-c78c7ede-412a-b6f9-0d5b-e24b9bad3c95" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.821 [INFO][4522] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.821 [INFO][4522] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.960 [INFO][4566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.962 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:04.964 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:05.004 [WARNING][4566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:05.004 [INFO][4566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:05.009 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:05.027128 containerd[1485]: 2025-07-12 00:14:05.020 [INFO][4522] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Jul 12 00:14:05.032263 containerd[1485]: time="2025-07-12T00:14:05.032013199Z" level=info msg="TearDown network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" successfully" Jul 12 00:14:05.032263 containerd[1485]: time="2025-07-12T00:14:05.032051278Z" level=info msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" returns successfully" Jul 12 00:14:05.032248 systemd[1]: run-netns-cni\x2dc78c7ede\x2d412a\x2db6f9\x2d0d5b\x2de24b9bad3c95.mount: Deactivated successfully. 
Jul 12 00:14:05.034488 containerd[1485]: time="2025-07-12T00:14:05.033997704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq9dw,Uid:fd55186a-824e-478a-8f31-23c3f2558e58,Namespace:kube-system,Attempt:1,}" Jul 12 00:14:05.215134 systemd-networkd[1376]: cali7126c2999e2: Link UP Jul 12 00:14:05.218964 systemd-networkd[1376]: cali7126c2999e2: Gained carrier Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:04.988 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0 coredns-7c65d6cfc9- kube-system f208ae55-8f4f-4d14-a26a-564c62f2524f 979 0 2025-07-12 00:13:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 coredns-7c65d6cfc9-rnjg4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7126c2999e2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:04.988 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.092 [INFO][4602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" HandleID="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.100 [INFO][4602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" HandleID="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314cf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"coredns-7c65d6cfc9-rnjg4", "timestamp":"2025-07-12 00:14:05.089584181 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.101 [INFO][4602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.101 [INFO][4602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.101 [INFO][4602] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.122 [INFO][4602] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.131 [INFO][4602] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.150 [INFO][4602] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.158 [INFO][4602] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.161 [INFO][4602] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.161 [INFO][4602] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.172 [INFO][4602] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228 Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.180 [INFO][4602] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.197 [INFO][4602] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.69/26] block=192.168.98.64/26 handle="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.197 [INFO][4602] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.69/26] handle="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.197 [INFO][4602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:05.269906 containerd[1485]: 2025-07-12 00:14:05.197 [INFO][4602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.69/26] IPv6=[] ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" HandleID="k8s-pod-network.e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.206 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f208ae55-8f4f-4d14-a26a-564c62f2524f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"coredns-7c65d6cfc9-rnjg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7126c2999e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.206 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.69/32] ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.206 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7126c2999e2 ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.223 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.226 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f208ae55-8f4f-4d14-a26a-564c62f2524f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228", Pod:"coredns-7c65d6cfc9-rnjg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7126c2999e2", MAC:"a6:42:63:66:c2:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.270517 containerd[1485]: 2025-07-12 00:14:05.258 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rnjg4" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:05.318028 systemd-networkd[1376]: cali5cbb56a0e33: Gained IPv6LL Jul 12 00:14:05.344477 systemd-networkd[1376]: calibd055cf51aa: Link UP Jul 12 00:14:05.347110 systemd-networkd[1376]: calibd055cf51aa: Gained carrier Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.117 [INFO][4590] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0 calico-kube-controllers-694bc47746- calico-system a5d34381-7fee-470d-b68b-74c007d52fdd 980 0 2025-07-12 00:13:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:694bc47746 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 calico-kube-controllers-694bc47746-lklc8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibd055cf51aa [] [] }} ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.117 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.204 [INFO][4632] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" HandleID="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4632] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" HandleID="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032ad80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"calico-kube-controllers-694bc47746-lklc8", "timestamp":"2025-07-12 00:14:05.204905345 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.240 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.257 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.276 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.283 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.289 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.289 [INFO][4632] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.296 [INFO][4632] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1 Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.307 [INFO][4632] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.326 [INFO][4632] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.70/26] block=192.168.98.64/26 handle="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.326 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.70/26] handle="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.326 [INFO][4632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:05.387087 containerd[1485]: 2025-07-12 00:14:05.326 [INFO][4632] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.70/26] IPv6=[] ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" HandleID="k8s-pod-network.123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.339 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0", GenerateName:"calico-kube-controllers-694bc47746-", Namespace:"calico-system", SelfLink:"", UID:"a5d34381-7fee-470d-b68b-74c007d52fdd", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694bc47746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"calico-kube-controllers-694bc47746-lklc8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd055cf51aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.340 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.70/32] ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.340 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd055cf51aa ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.349 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" 
WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.354 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0", GenerateName:"calico-kube-controllers-694bc47746-", Namespace:"calico-system", SelfLink:"", UID:"a5d34381-7fee-470d-b68b-74c007d52fdd", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694bc47746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1", Pod:"calico-kube-controllers-694bc47746-lklc8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd055cf51aa", MAC:"a6:0f:f3:aa:54:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.388539 containerd[1485]: 2025-07-12 00:14:05.376 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1" Namespace="calico-system" Pod="calico-kube-controllers-694bc47746-lklc8" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:05.392957 containerd[1485]: time="2025-07-12T00:14:05.392698224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:05.392957 containerd[1485]: time="2025-07-12T00:14:05.392754424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:05.392957 containerd[1485]: time="2025-07-12T00:14:05.392766064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.394380 containerd[1485]: time="2025-07-12T00:14:05.393114101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.434382 systemd[1]: Started cri-containerd-e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228.scope - libcontainer container e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228. Jul 12 00:14:05.455431 containerd[1485]: time="2025-07-12T00:14:05.453137266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:05.455431 containerd[1485]: time="2025-07-12T00:14:05.453221946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:05.455431 containerd[1485]: time="2025-07-12T00:14:05.453239505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.455431 containerd[1485]: time="2025-07-12T00:14:05.453359465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.475976 systemd-networkd[1376]: cali16073ae89ee: Link UP Jul 12 00:14:05.484373 systemd-networkd[1376]: cali16073ae89ee: Gained carrier Jul 12 00:14:05.507044 systemd[1]: Started cri-containerd-123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1.scope - libcontainer container 123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1. Jul 12 00:14:05.538947 containerd[1485]: time="2025-07-12T00:14:05.538820165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rnjg4,Uid:f208ae55-8f4f-4d14-a26a-564c62f2524f,Namespace:kube-system,Attempt:1,} returns sandbox id \"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228\"" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.168 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0 csi-node-driver- calico-system c9d45977-08ad-4a73-90ca-4efb866e9fdb 981 0 2025-07-12 00:13:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 csi-node-driver-twcjp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali16073ae89ee [] [] }} ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.168 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.292 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" HandleID="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" 
Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.295 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" HandleID="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004daa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"csi-node-driver-twcjp", "timestamp":"2025-07-12 00:14:05.29251499 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.295 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.328 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.333 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.364 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.391 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.404 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.413 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.423 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.423 [INFO][4641] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.429 [INFO][4641] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9 Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.442 [INFO][4641] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.461 [INFO][4641] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.71/26] block=192.168.98.64/26 handle="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.461 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.71/26] 
handle="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.461 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:05.551766 containerd[1485]: 2025-07-12 00:14:05.461 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.71/26] IPv6=[] ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" HandleID="k8s-pod-network.f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.469 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d45977-08ad-4a73-90ca-4efb866e9fdb", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"csi-node-driver-twcjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16073ae89ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.469 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.71/32] ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.469 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16073ae89ee ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.503 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" 
WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.520 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d45977-08ad-4a73-90ca-4efb866e9fdb", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9", Pod:"csi-node-driver-twcjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16073ae89ee", MAC:"92:18:1c:ec:39:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.552653 containerd[1485]: 2025-07-12 00:14:05.538 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9" Namespace="calico-system" Pod="csi-node-driver-twcjp" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:05.555370 containerd[1485]: time="2025-07-12T00:14:05.554954608Z" level=info msg="CreateContainer within sandbox \"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:14:05.651719 systemd-networkd[1376]: cali8e1bee9f988: Link UP Jul 12 00:14:05.654875 systemd-networkd[1376]: cali8e1bee9f988: Gained carrier Jul 12 00:14:05.656818 containerd[1485]: time="2025-07-12T00:14:05.626325211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:05.656818 containerd[1485]: time="2025-07-12T00:14:05.626382930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:05.656818 containerd[1485]: time="2025-07-12T00:14:05.626408450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.656818 containerd[1485]: time="2025-07-12T00:14:05.626502409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.675850 containerd[1485]: time="2025-07-12T00:14:05.675516094Z" level=info msg="CreateContainer within sandbox \"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88d3bd7c687101676119c6c4fabb136bfa511ba3d12050c000e521d47c79d809\"" Jul 12 00:14:05.679354 containerd[1485]: time="2025-07-12T00:14:05.679214947Z" level=info msg="StartContainer for \"88d3bd7c687101676119c6c4fabb136bfa511ba3d12050c000e521d47c79d809\"" Jul 12 00:14:05.682017 systemd[1]: Started cri-containerd-f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9.scope - libcontainer container f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9. Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0 coredns-7c65d6cfc9- kube-system fd55186a-824e-478a-8f31-23c3f2558e58 982 0 2025-07-12 00:13:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-n-8926aa35a3 coredns-7c65d6cfc9-pq9dw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8e1bee9f988 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.205 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.306 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" HandleID="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.312 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" HandleID="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330140), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-n-8926aa35a3", "pod":"coredns-7c65d6cfc9-pq9dw", "timestamp":"2025-07-12 00:14:05.305968453 +0000 UTC"}, Hostname:"ci-4081-3-4-n-8926aa35a3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.312 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.461 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.462 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-n-8926aa35a3' Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.505 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.522 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.541 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.553 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.567 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.567 [INFO][4650] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.572 [INFO][4650] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996 Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.581 [INFO][4650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.606 [INFO][4650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.98.72/26] block=192.168.98.64/26 handle="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.606 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.72/26] handle="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" host="ci-4081-3-4-n-8926aa35a3" Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.606 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 12 00:14:05.701864 containerd[1485]: 2025-07-12 00:14:05.606 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.98.72/26] IPv6=[] ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" HandleID="k8s-pod-network.c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.620 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd55186a-824e-478a-8f31-23c3f2558e58", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"", Pod:"coredns-7c65d6cfc9-pq9dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e1bee9f988", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.627 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.72/32] ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.627 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e1bee9f988 ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.653 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.653 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd55186a-824e-478a-8f31-23c3f2558e58", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996", Pod:"coredns-7c65d6cfc9-pq9dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e1bee9f988", MAC:"46:1a:69:29:32:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:05.702438 containerd[1485]: 2025-07-12 00:14:05.680 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pq9dw" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0" Jul 12 00:14:05.760161 containerd[1485]: time="2025-07-12T00:14:05.760008202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694bc47746-lklc8,Uid:a5d34381-7fee-470d-b68b-74c007d52fdd,Namespace:calico-system,Attempt:1,} returns sandbox id \"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1\"" Jul 12 00:14:05.793534 containerd[1485]: time="2025-07-12T00:14:05.786526810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:14:05.793534 containerd[1485]: time="2025-07-12T00:14:05.786584449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:14:05.793534 containerd[1485]: time="2025-07-12T00:14:05.786600289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.793534 containerd[1485]: time="2025-07-12T00:14:05.786675769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:14:05.826382 containerd[1485]: time="2025-07-12T00:14:05.826332841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twcjp,Uid:c9d45977-08ad-4a73-90ca-4efb866e9fdb,Namespace:calico-system,Attempt:1,} returns sandbox id \"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9\"" Jul 12 00:14:05.830037 systemd[1]: Started cri-containerd-88d3bd7c687101676119c6c4fabb136bfa511ba3d12050c000e521d47c79d809.scope - libcontainer container 88d3bd7c687101676119c6c4fabb136bfa511ba3d12050c000e521d47c79d809. Jul 12 00:14:05.846099 systemd[1]: Started cri-containerd-c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996.scope - libcontainer container c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996. Jul 12 00:14:05.959141 containerd[1485]: time="2025-07-12T00:14:05.959096079Z" level=info msg="StartContainer for \"88d3bd7c687101676119c6c4fabb136bfa511ba3d12050c000e521d47c79d809\" returns successfully" Jul 12 00:14:05.961315 containerd[1485]: time="2025-07-12T00:14:05.960966385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pq9dw,Uid:fd55186a-824e-478a-8f31-23c3f2558e58,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996\"" Jul 12 00:14:05.967056 containerd[1485]: time="2025-07-12T00:14:05.967005901Z" level=info msg="CreateContainer within sandbox \"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:14:06.024135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472355782.mount: Deactivated successfully. Jul 12 00:14:06.029184 containerd[1485]: time="2025-07-12T00:14:06.029066113Z" level=info msg="CreateContainer within sandbox \"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44af1f73413b9d025dff8d385612e7a626b7d92d5c0427b12bd62b68d822864d\"" Jul 12 00:14:06.030487 containerd[1485]: time="2025-07-12T00:14:06.030067747Z" level=info msg="StartContainer for \"44af1f73413b9d025dff8d385612e7a626b7d92d5c0427b12bd62b68d822864d\"" Jul 12 00:14:06.086026 systemd[1]: Started cri-containerd-44af1f73413b9d025dff8d385612e7a626b7d92d5c0427b12bd62b68d822864d.scope - libcontainer container 44af1f73413b9d025dff8d385612e7a626b7d92d5c0427b12bd62b68d822864d. Jul 12 00:14:06.159933 containerd[1485]: time="2025-07-12T00:14:06.159172989Z" level=info msg="StartContainer for \"44af1f73413b9d025dff8d385612e7a626b7d92d5c0427b12bd62b68d822864d\" returns successfully" Jul 12 00:14:06.534776 systemd-networkd[1376]: cali7126c2999e2: Gained IPv6LL Jul 12 00:14:06.662860 systemd-networkd[1376]: calibd055cf51aa: Gained IPv6LL Jul 12 00:14:06.984953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401604274.mount: Deactivated successfully. 
Jul 12 00:14:07.018072 kubelet[2601]: I0712 00:14:07.017907 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rnjg4" podStartSLOduration=43.017473391 podStartE2EDuration="43.017473391s" podCreationTimestamp="2025-07-12 00:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:06.974636856 +0000 UTC m=+48.551629960" watchObservedRunningTime="2025-07-12 00:14:07.017473391 +0000 UTC m=+48.594466495" Jul 12 00:14:07.237939 systemd-networkd[1376]: cali16073ae89ee: Gained IPv6LL Jul 12 00:14:07.532201 containerd[1485]: time="2025-07-12T00:14:07.531416753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:07.533860 containerd[1485]: time="2025-07-12T00:14:07.533623460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 12 00:14:07.536838 containerd[1485]: time="2025-07-12T00:14:07.535908767Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:07.544073 containerd[1485]: time="2025-07-12T00:14:07.543767722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.392705753s" Jul 12 00:14:07.544328 containerd[1485]: time="2025-07-12T00:14:07.544304839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 12 00:14:07.544668 containerd[1485]: time="2025-07-12T00:14:07.544225599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:07.548422 containerd[1485]: time="2025-07-12T00:14:07.548374815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 12 00:14:07.556410 containerd[1485]: time="2025-07-12T00:14:07.554362541Z" level=info msg="CreateContainer within sandbox \"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 12 00:14:07.584901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355283071.mount: Deactivated successfully. 
Jul 12 00:14:07.589449 containerd[1485]: time="2025-07-12T00:14:07.589408179Z" level=info msg="CreateContainer within sandbox \"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f\"" Jul 12 00:14:07.591310 containerd[1485]: time="2025-07-12T00:14:07.590935570Z" level=info msg="StartContainer for \"f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f\"" Jul 12 00:14:07.622771 systemd-networkd[1376]: cali8e1bee9f988: Gained IPv6LL Jul 12 00:14:07.677183 systemd[1]: Started cri-containerd-f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f.scope - libcontainer container f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f. Jul 12 00:14:07.793716 containerd[1485]: time="2025-07-12T00:14:07.793348245Z" level=info msg="StartContainer for \"f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f\" returns successfully" Jul 12 00:14:07.984875 kubelet[2601]: I0712 00:14:07.984721 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pq9dw" podStartSLOduration=43.984702624 podStartE2EDuration="43.984702624s" podCreationTimestamp="2025-07-12 00:13:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:14:07.048757691 +0000 UTC m=+48.625750755" watchObservedRunningTime="2025-07-12 00:14:07.984702624 +0000 UTC m=+49.561695688" Jul 12 00:14:07.985064 kubelet[2601]: I0712 00:14:07.984996 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-25c9q" podStartSLOduration=23.589286568 podStartE2EDuration="26.984990942s" podCreationTimestamp="2025-07-12 00:13:41 +0000 UTC" firstStartedPulling="2025-07-12 00:14:04.150621373 +0000 UTC m=+45.727614477" lastFinishedPulling="2025-07-12 00:14:07.546325747 +0000 UTC m=+49.123318851" observedRunningTime="2025-07-12 00:14:07.983970188 +0000 UTC m=+49.560963252" watchObservedRunningTime="2025-07-12 00:14:07.984990942 +0000 UTC m=+49.561984046" Jul 12 00:14:08.993997 systemd[1]: run-containerd-runc-k8s.io-f745f12a4831f628481f2081055f335bfd8bfdc1f675308569619cef435f3d1f-runc.2ZRFLc.mount: Deactivated successfully. 
Jul 12 00:14:09.244369 kubelet[2601]: I0712 00:14:09.244036 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:11.365473 containerd[1485]: time="2025-07-12T00:14:11.364707502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:11.366604 containerd[1485]: time="2025-07-12T00:14:11.366566257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 12 00:14:11.367763 containerd[1485]: time="2025-07-12T00:14:11.367711573Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:11.371698 containerd[1485]: time="2025-07-12T00:14:11.370965643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:11.372087 containerd[1485]: time="2025-07-12T00:14:11.372046800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.821580037s" Jul 12 00:14:11.372249 containerd[1485]: time="2025-07-12T00:14:11.372214640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 12 00:14:11.374533 containerd[1485]: time="2025-07-12T00:14:11.374513553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 12 00:14:11.394587 containerd[1485]: time="2025-07-12T00:14:11.394470252Z" level=info msg="CreateContainer within sandbox \"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 12 00:14:11.418996 containerd[1485]: time="2025-07-12T00:14:11.418932658Z" level=info msg="CreateContainer within sandbox \"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d\"" Jul 12 00:14:11.421457 containerd[1485]: time="2025-07-12T00:14:11.421414610Z" level=info msg="StartContainer for \"4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d\"" Jul 12 00:14:11.479197 systemd[1]: Started cri-containerd-4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d.scope - libcontainer container 4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d. 
Jul 12 00:14:11.560628 containerd[1485]: time="2025-07-12T00:14:11.560476828Z" level=info msg="StartContainer for \"4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d\" returns successfully" Jul 12 00:14:12.094975 kubelet[2601]: I0712 00:14:12.094900 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-694bc47746-lklc8" podStartSLOduration=25.485896996 podStartE2EDuration="31.094883103s" podCreationTimestamp="2025-07-12 00:13:41 +0000 UTC" firstStartedPulling="2025-07-12 00:14:05.764537289 +0000 UTC m=+47.341530393" lastFinishedPulling="2025-07-12 00:14:11.373523396 +0000 UTC m=+52.950516500" observedRunningTime="2025-07-12 00:14:11.996993101 +0000 UTC m=+53.573986205" watchObservedRunningTime="2025-07-12 00:14:12.094883103 +0000 UTC m=+53.671876167" Jul 12 00:14:12.786437 containerd[1485]: time="2025-07-12T00:14:12.786319875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:12.788513 containerd[1485]: time="2025-07-12T00:14:12.788463070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 12 00:14:12.789154 containerd[1485]: time="2025-07-12T00:14:12.788871589Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:12.791669 containerd[1485]: time="2025-07-12T00:14:12.791633022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:12.792566 containerd[1485]: time="2025-07-12T00:14:12.792523820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.417897188s" Jul 12 00:14:12.792646 containerd[1485]: time="2025-07-12T00:14:12.792564340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 12 00:14:12.795558 containerd[1485]: time="2025-07-12T00:14:12.795527373Z" level=info msg="CreateContainer within sandbox \"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 12 00:14:12.818063 containerd[1485]: time="2025-07-12T00:14:12.818003359Z" level=info msg="CreateContainer within sandbox \"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d595c83d9ee3b3b118547c570415d57c47514b408d13388aa7c553c1fb26d6b8\"" Jul 12 00:14:12.820596 containerd[1485]: time="2025-07-12T00:14:12.818931276Z" level=info msg="StartContainer for \"d595c83d9ee3b3b118547c570415d57c47514b408d13388aa7c553c1fb26d6b8\"" Jul 12 00:14:12.865072 systemd[1]: Started cri-containerd-d595c83d9ee3b3b118547c570415d57c47514b408d13388aa7c553c1fb26d6b8.scope - libcontainer container d595c83d9ee3b3b118547c570415d57c47514b408d13388aa7c553c1fb26d6b8. 
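Each containerd "Pulled image ... in Ns" duration above can be cross-checked against the wall-clock gap between the corresponding PullImage request and the Pulled line; for the csi:v3.30.2 image the two agree to about a tenth of a millisecond, plausibly because containerd starts its internal timer just after the request line is emitted (that explanation is an assumption; the timestamps themselves are from the log):

# csi:v3.30.2 pull: PullImage logged at 00:14:11.374513553,
# Pulled logged at 00:14:12.792523820.
gap = 12.792523820 - 11.374513553
print(f"{gap:.9f}")   # -> 1.418010267, vs the reported 1.417897188s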
Jul 12 00:14:12.902126 containerd[1485]: time="2025-07-12T00:14:12.902082756Z" level=info msg="StartContainer for \"d595c83d9ee3b3b118547c570415d57c47514b408d13388aa7c553c1fb26d6b8\" returns successfully" Jul 12 00:14:12.904866 containerd[1485]: time="2025-07-12T00:14:12.904832229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 12 00:14:14.415131 containerd[1485]: time="2025-07-12T00:14:14.415075328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:14.415131 containerd[1485]: time="2025-07-12T00:14:14.415829527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 12 00:14:14.417265 containerd[1485]: time="2025-07-12T00:14:14.416934326Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:14.424822 containerd[1485]: time="2025-07-12T00:14:14.423486798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:14:14.425514 containerd[1485]: time="2025-07-12T00:14:14.425475796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.520412407s" Jul 12 00:14:14.425620 containerd[1485]: time="2025-07-12T00:14:14.425604115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 12 00:14:14.431039 containerd[1485]: time="2025-07-12T00:14:14.430994709Z" level=info msg="CreateContainer within sandbox \"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 12 00:14:14.458697 containerd[1485]: time="2025-07-12T00:14:14.458635715Z" level=info msg="CreateContainer within sandbox \"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"deb9f4ea8dcb6d76e5e3aed1a30669118326c96c9e93825a509e8a183254d78e\"" Jul 12 00:14:14.459864 containerd[1485]: time="2025-07-12T00:14:14.459770114Z" level=info msg="StartContainer for \"deb9f4ea8dcb6d76e5e3aed1a30669118326c96c9e93825a509e8a183254d78e\"" Jul 12 00:14:14.516346 systemd[1]: Started cri-containerd-deb9f4ea8dcb6d76e5e3aed1a30669118326c96c9e93825a509e8a183254d78e.scope - libcontainer container deb9f4ea8dcb6d76e5e3aed1a30669118326c96c9e93825a509e8a183254d78e. 
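The three pulls in this stretch each report both a transfer size ("bytes read" in the stop-pulling lines) and a duration, so an effective registry throughput falls out directly; a small sketch with the values copied from the entries above:

# (bytes read, seconds) from the kube-controllers, csi, and
# node-driver-registrar pull entries above.
pulls = {
    "kube-controllers:v3.30.2":      (48128336, 3.821580037),
    "csi:v3.30.2":                   (8225702,  1.417897188),
    "node-driver-registrar:v3.30.2": (13754366, 1.520412407),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")   # ~12.6, ~5.8, ~9.0 MB/s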
Jul 12 00:14:14.552446 containerd[1485]: time="2025-07-12T00:14:14.552371721Z" level=info msg="StartContainer for \"deb9f4ea8dcb6d76e5e3aed1a30669118326c96c9e93825a509e8a183254d78e\" returns successfully" Jul 12 00:14:15.720671 kubelet[2601]: I0712 00:14:15.720520 2601 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 12 00:14:15.720671 kubelet[2601]: I0712 00:14:15.720665 2601 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 12 00:14:18.565906 containerd[1485]: time="2025-07-12T00:14:18.565489018Z" level=info msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.623 [WARNING][5215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d45977-08ad-4a73-90ca-4efb866e9fdb", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9", Pod:"csi-node-driver-twcjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16073ae89ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.623 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.623 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" iface="eth0" netns="" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.623 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.623 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.663 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.664 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.664 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.680 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.681 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.684 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:18.689313 containerd[1485]: 2025-07-12 00:14:18.686 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.690472 containerd[1485]: time="2025-07-12T00:14:18.689335896Z" level=info msg="TearDown network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" successfully" Jul 12 00:14:18.690472 containerd[1485]: time="2025-07-12T00:14:18.689362056Z" level=info msg="StopPodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" returns successfully" Jul 12 00:14:18.690472 containerd[1485]: time="2025-07-12T00:14:18.690018297Z" level=info msg="RemovePodSandbox for \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" Jul 12 00:14:18.697705 containerd[1485]: time="2025-07-12T00:14:18.697374544Z" level=info msg="Forcibly stopping sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\"" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.758 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c9d45977-08ad-4a73-90ca-4efb866e9fdb", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"f592c5e28a59b08debf2bdd94e1ae5bec66a197c584c10e0f733b90615cab3c9", Pod:"csi-node-driver-twcjp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali16073ae89ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.759 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.759 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" iface="eth0" netns="" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.759 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.759 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.786 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.786 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.786 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.805 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.805 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" HandleID="k8s-pod-network.233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Workload="ci--4081--3--4--n--8926aa35a3-k8s-csi--node--driver--twcjp-eth0" Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.808 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:18.812562 containerd[1485]: 2025-07-12 00:14:18.810 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61" Jul 12 00:14:18.814450 containerd[1485]: time="2025-07-12T00:14:18.813067775Z" level=info msg="TearDown network for sandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" successfully" Jul 12 00:14:18.817619 containerd[1485]: time="2025-07-12T00:14:18.817070459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:14:18.817619 containerd[1485]: time="2025-07-12T00:14:18.817166259Z" level=info msg="RemovePodSandbox \"233e984ca1c02f9f6680110362a180f7cfcadf78ccd28853e67180c1f81d0c61\" returns successfully" Jul 12 00:14:18.817887 containerd[1485]: time="2025-07-12T00:14:18.817761540Z" level=info msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.880 [WARNING][5259] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139", Pod:"calico-apiserver-5bcd7b9b6d-97z8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief19aedaaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.886 [INFO][5259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.886 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" iface="eth0" netns="" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.886 [INFO][5259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.886 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.930 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.930 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.931 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.958 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.958 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.979 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:18.997441 containerd[1485]: 2025-07-12 00:14:18.991 [INFO][5259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:18.999366 containerd[1485]: time="2025-07-12T00:14:18.997486792Z" level=info msg="TearDown network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" successfully" Jul 12 00:14:18.999366 containerd[1485]: time="2025-07-12T00:14:18.997515032Z" level=info msg="StopPodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" returns successfully" Jul 12 00:14:18.999366 containerd[1485]: time="2025-07-12T00:14:18.998194473Z" level=info msg="RemovePodSandbox for \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" Jul 12 00:14:18.999366 containerd[1485]: time="2025-07-12T00:14:18.998228753Z" level=info msg="Forcibly stopping sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\"" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.071 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0cd4f15-360a-4b9e-86d5-922d2b6ce0b5", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"2cf33ac62a39c5592d33700916d8c389b30c82d612fb6a830d29d3f4d1c03139", Pod:"calico-apiserver-5bcd7b9b6d-97z8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief19aedaaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.071 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.071 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" iface="eth0" netns="" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.071 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.071 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.110 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.110 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.110 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.122 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.122 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" HandleID="k8s-pod-network.fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--97z8q-eth0" Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.124 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.129051 containerd[1485]: 2025-07-12 00:14:19.126 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a" Jul 12 00:14:19.130866 containerd[1485]: time="2025-07-12T00:14:19.129032182Z" level=info msg="TearDown network for sandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" successfully" Jul 12 00:14:19.135967 containerd[1485]: time="2025-07-12T00:14:19.135911312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:14:19.136125 containerd[1485]: time="2025-07-12T00:14:19.135997633Z" level=info msg="RemovePodSandbox \"fdfc6a76d656e906a8d813c659aebe3894168268dd6194720927bd7d30bf881a\" returns successfully" Jul 12 00:14:19.136533 containerd[1485]: time="2025-07-12T00:14:19.136496913Z" level=info msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.207 [WARNING][5335] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0", GenerateName:"calico-kube-controllers-694bc47746-", Namespace:"calico-system", SelfLink:"", UID:"a5d34381-7fee-470d-b68b-74c007d52fdd", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694bc47746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1", Pod:"calico-kube-controllers-694bc47746-lklc8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd055cf51aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.208 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.208 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" iface="eth0" netns="" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.208 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.208 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.243 [INFO][5342] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.245 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.245 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.255 [WARNING][5342] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.255 [INFO][5342] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.257 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.262114 containerd[1485]: 2025-07-12 00:14:19.260 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.262994 containerd[1485]: time="2025-07-12T00:14:19.262186697Z" level=info msg="TearDown network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" successfully" Jul 12 00:14:19.262994 containerd[1485]: time="2025-07-12T00:14:19.262213297Z" level=info msg="StopPodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" returns successfully" Jul 12 00:14:19.263830 containerd[1485]: time="2025-07-12T00:14:19.263736619Z" level=info msg="RemovePodSandbox for \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" Jul 12 00:14:19.263964 containerd[1485]: time="2025-07-12T00:14:19.263842539Z" level=info msg="Forcibly stopping sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\"" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.341 [WARNING][5356] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0", GenerateName:"calico-kube-controllers-694bc47746-", Namespace:"calico-system", SelfLink:"", UID:"a5d34381-7fee-470d-b68b-74c007d52fdd", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694bc47746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"123bfc0358fbf19180150a0ce6fa1d39a3a73e8e9a3081e5dac2c035ec86b1a1", Pod:"calico-kube-controllers-694bc47746-lklc8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd055cf51aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.342 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.342 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" iface="eth0" netns="" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.342 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.342 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.384 [INFO][5370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.384 [INFO][5370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.384 [INFO][5370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.397 [WARNING][5370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.398 [INFO][5370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" HandleID="k8s-pod-network.72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--kube--controllers--694bc47746--lklc8-eth0" Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.399 [INFO][5370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.403141 containerd[1485]: 2025-07-12 00:14:19.401 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea" Jul 12 00:14:19.404026 containerd[1485]: time="2025-07-12T00:14:19.403980344Z" level=info msg="TearDown network for sandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" successfully" Jul 12 00:14:19.409658 containerd[1485]: time="2025-07-12T00:14:19.409190512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:14:19.409861 containerd[1485]: time="2025-07-12T00:14:19.409681272Z" level=info msg="RemovePodSandbox \"72c82ce70190c0c9a7bf57e5fb3f74900ec25162452bc5964ba8c68f68ad22ea\" returns successfully" Jul 12 00:14:19.410979 containerd[1485]: time="2025-07-12T00:14:19.410729234Z" level=info msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" Jul 12 00:14:19.516763 kubelet[2601]: I0712 00:14:19.514652 2601 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-twcjp" podStartSLOduration=29.929014041 podStartE2EDuration="38.514634306s" podCreationTimestamp="2025-07-12 00:13:41 +0000 UTC" firstStartedPulling="2025-07-12 00:14:05.841870568 +0000 UTC m=+47.418863672" lastFinishedPulling="2025-07-12 00:14:14.427490833 +0000 UTC m=+56.004483937" observedRunningTime="2025-07-12 00:14:15.017346965 +0000 UTC m=+56.594340069" watchObservedRunningTime="2025-07-12 00:14:19.514634306 +0000 UTC m=+61.091627410" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.470 [WARNING][5385] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca", Pod:"goldmane-58fd7646b9-25c9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cbb56a0e33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.470 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.470 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" iface="eth0" netns="" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.470 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.470 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.504 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.504 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.504 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.522 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.522 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.526 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.531713 containerd[1485]: 2025-07-12 00:14:19.528 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.532609 containerd[1485]: time="2025-07-12T00:14:19.531732691Z" level=info msg="TearDown network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" successfully" Jul 12 00:14:19.532609 containerd[1485]: time="2025-07-12T00:14:19.531758291Z" level=info msg="StopPodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" returns successfully" Jul 12 00:14:19.533525 containerd[1485]: time="2025-07-12T00:14:19.533401973Z" level=info msg="RemovePodSandbox for \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" Jul 12 00:14:19.533525 containerd[1485]: time="2025-07-12T00:14:19.533508293Z" level=info msg="Forcibly stopping sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\"" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.583 [WARNING][5407] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8dcaf788-c931-4d5f-8dbb-aa867fceaa4c", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"438f7a642997d67f5f2f2cfd4566353a4979d2ef2feccc5650bd7208543945ca", Pod:"goldmane-58fd7646b9-25c9q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cbb56a0e33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.583 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.583 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" iface="eth0" netns="" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.583 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.584 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.615 [INFO][5415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.615 [INFO][5415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.615 [INFO][5415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.630 [WARNING][5415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.630 [INFO][5415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" HandleID="k8s-pod-network.626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Workload="ci--4081--3--4--n--8926aa35a3-k8s-goldmane--58fd7646b9--25c9q-eth0" Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.632 [INFO][5415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.636481 containerd[1485]: 2025-07-12 00:14:19.634 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f" Jul 12 00:14:19.637625 containerd[1485]: time="2025-07-12T00:14:19.636504364Z" level=info msg="TearDown network for sandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" successfully" Jul 12 00:14:19.642587 containerd[1485]: time="2025-07-12T00:14:19.642400212Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:14:19.642731 containerd[1485]: time="2025-07-12T00:14:19.642634453Z" level=info msg="RemovePodSandbox \"626611ef8394b57012735e58461f0a9ce47db302fb0fe778fe4651940921095f\" returns successfully" Jul 12 00:14:19.643740 containerd[1485]: time="2025-07-12T00:14:19.643683214Z" level=info msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.687 [WARNING][5430] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f208ae55-8f4f-4d14-a26a-564c62f2524f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228", Pod:"coredns-7c65d6cfc9-rnjg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7126c2999e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.687 [INFO][5430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.687 [INFO][5430] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" iface="eth0" netns="" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.688 [INFO][5430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.688 [INFO][5430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.715 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.715 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.715 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.727 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.727 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.729 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 12 00:14:19.734012 containerd[1485]: 2025-07-12 00:14:19.731 [INFO][5430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.734012 containerd[1485]: time="2025-07-12T00:14:19.733764906Z" level=info msg="TearDown network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" successfully" Jul 12 00:14:19.734012 containerd[1485]: time="2025-07-12T00:14:19.733801186Z" level=info msg="StopPodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" returns successfully" Jul 12 00:14:19.735131 containerd[1485]: time="2025-07-12T00:14:19.735078148Z" level=info msg="RemovePodSandbox for \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" Jul 12 00:14:19.735253 containerd[1485]: time="2025-07-12T00:14:19.735138348Z" level=info msg="Forcibly stopping sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\"" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.780 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f208ae55-8f4f-4d14-a26a-564c62f2524f", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"e40ab5be7e8ff84b2787efbc3bd85a838c35c056587726b6153cd3821eae9228", Pod:"coredns-7c65d6cfc9-rnjg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7126c2999e2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.780 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.780 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" iface="eth0" netns="" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.780 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.780 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.813 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0" Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.813 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.813 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.823 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0"
Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.823 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" HandleID="k8s-pod-network.aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--rnjg4-eth0"
Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.825 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:19.829980 containerd[1485]: 2025-07-12 00:14:19.827 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63"
Jul 12 00:14:19.830736 containerd[1485]: time="2025-07-12T00:14:19.830219007Z" level=info msg="TearDown network for sandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" successfully"
Jul 12 00:14:19.837997 containerd[1485]: time="2025-07-12T00:14:19.837867538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:14:19.838233 containerd[1485]: time="2025-07-12T00:14:19.838020498Z" level=info msg="RemovePodSandbox \"aa70169075ed421e33bf863baaf93ede34b7b0da35f1ac62a7191c8bd6564a63\" returns successfully"
Jul 12 00:14:19.838965 containerd[1485]: time="2025-07-12T00:14:19.838925699Z" level=info msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\""
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.900 [WARNING][5474] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.901 [INFO][5474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.901 [INFO][5474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" iface="eth0" netns=""
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.901 [INFO][5474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.901 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.926 [INFO][5481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.926 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.926 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.938 [WARNING][5481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.939 [INFO][5481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.941 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:19.949243 containerd[1485]: 2025-07-12 00:14:19.946 [INFO][5474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:19.951643 containerd[1485]: time="2025-07-12T00:14:19.949278261Z" level=info msg="TearDown network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" successfully"
Jul 12 00:14:19.951643 containerd[1485]: time="2025-07-12T00:14:19.949306661Z" level=info msg="StopPodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" returns successfully"
Jul 12 00:14:19.952229 containerd[1485]: time="2025-07-12T00:14:19.952022545Z" level=info msg="RemovePodSandbox for \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\""
Jul 12 00:14:19.952229 containerd[1485]: time="2025-07-12T00:14:19.952128105Z" level=info msg="Forcibly stopping sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\""
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.008 [WARNING][5495] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" WorkloadEndpoint="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.008 [INFO][5495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.008 [INFO][5495] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" iface="eth0" netns=""
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.008 [INFO][5495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.008 [INFO][5495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.050 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.050 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.051 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.067 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.067 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" HandleID="k8s-pod-network.75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3" Workload="ci--4081--3--4--n--8926aa35a3-k8s-whisker--674cf9c754--qpdxf-eth0"
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.070 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:20.076195 containerd[1485]: 2025-07-12 00:14:20.073 [INFO][5495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3"
Jul 12 00:14:20.077026 containerd[1485]: time="2025-07-12T00:14:20.076307123Z" level=info msg="TearDown network for sandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" successfully"
Jul 12 00:14:20.082315 containerd[1485]: time="2025-07-12T00:14:20.082248374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:14:20.082620 containerd[1485]: time="2025-07-12T00:14:20.082499255Z" level=info msg="RemovePodSandbox \"75e3725000fea12e778500f2d15b2bce5fa67efe979339d3ef782a15254c6df3\" returns successfully"
Jul 12 00:14:20.083823 containerd[1485]: time="2025-07-12T00:14:20.083775417Z" level=info msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\""
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.136 [WARNING][5517] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd55186a-824e-478a-8f31-23c3f2558e58", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996", Pod:"coredns-7c65d6cfc9-pq9dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e1bee9f988", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.136 [INFO][5517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.136 [INFO][5517] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" iface="eth0" netns=""
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.136 [INFO][5517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.137 [INFO][5517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.164 [INFO][5524] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.164 [INFO][5524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.164 [INFO][5524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.174 [WARNING][5524] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.174 [INFO][5524] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.176 [INFO][5524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:20.181217 containerd[1485]: 2025-07-12 00:14:20.178 [INFO][5517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.181217 containerd[1485]: time="2025-07-12T00:14:20.180602046Z" level=info msg="TearDown network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" successfully"
Jul 12 00:14:20.181217 containerd[1485]: time="2025-07-12T00:14:20.180626366Z" level=info msg="StopPodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" returns successfully"
Jul 12 00:14:20.182041 containerd[1485]: time="2025-07-12T00:14:20.181978569Z" level=info msg="RemovePodSandbox for \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\""
Jul 12 00:14:20.182041 containerd[1485]: time="2025-07-12T00:14:20.182015969Z" level=info msg="Forcibly stopping sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\""
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.229 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fd55186a-824e-478a-8f31-23c3f2558e58", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"c0ea8d557275a4be8f096aff2d3a540ac3a0dc513fad45c58d5d5f72fe3b2996", Pod:"coredns-7c65d6cfc9-pq9dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8e1bee9f988", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.230 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.230 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" iface="eth0" netns=""
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.230 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.230 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.254 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.254 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.254 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.267 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.268 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" HandleID="k8s-pod-network.75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7" Workload="ci--4081--3--4--n--8926aa35a3-k8s-coredns--7c65d6cfc9--pq9dw-eth0"
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.270 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.272 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7"
Jul 12 00:14:20.276126 containerd[1485]: time="2025-07-12T00:14:20.274470509Z" level=info msg="TearDown network for sandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" successfully"
Jul 12 00:14:20.278143 containerd[1485]: time="2025-07-12T00:14:20.278041476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:14:20.278239 containerd[1485]: time="2025-07-12T00:14:20.278189756Z" level=info msg="RemovePodSandbox \"75db158ef0d5c2e16b6187da36a2d3259de883b6407ecbddc087a38d31b718a7\" returns successfully"
Jul 12 00:14:20.278786 containerd[1485]: time="2025-07-12T00:14:20.278754477Z" level=info msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\""
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.321 [WARNING][5559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"263f7977-4a38-4140-adcf-d1a6d16328ea", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828", Pod:"calico-apiserver-5bcd7b9b6d-g2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali178da9e3224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.321 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.321 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" iface="eth0" netns=""
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.321 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.321 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.350 [INFO][5566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.350 [INFO][5566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.350 [INFO][5566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.363 [WARNING][5566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.363 [INFO][5566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.366 [INFO][5566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:20.371729 containerd[1485]: 2025-07-12 00:14:20.367 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.371729 containerd[1485]: time="2025-07-12T00:14:20.371674418Z" level=info msg="TearDown network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" successfully"
Jul 12 00:14:20.371729 containerd[1485]: time="2025-07-12T00:14:20.371697978Z" level=info msg="StopPodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" returns successfully"
Jul 12 00:14:20.373157 containerd[1485]: time="2025-07-12T00:14:20.372417659Z" level=info msg="RemovePodSandbox for \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\""
Jul 12 00:14:20.373157 containerd[1485]: time="2025-07-12T00:14:20.372458500Z" level=info msg="Forcibly stopping sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\""
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.424 [WARNING][5580] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0", GenerateName:"calico-apiserver-5bcd7b9b6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"263f7977-4a38-4140-adcf-d1a6d16328ea", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 12, 0, 13, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bcd7b9b6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-n-8926aa35a3", ContainerID:"eb43543a1a50f6677c13206df8f1ac0a55730c9dd6beebd063a70ae329ce4828", Pod:"calico-apiserver-5bcd7b9b6d-g2glf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali178da9e3224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.427 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.427 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" iface="eth0" netns=""
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.427 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.427 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.458 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.458 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.458 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.470 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.470 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" HandleID="k8s-pod-network.b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207" Workload="ci--4081--3--4--n--8926aa35a3-k8s-calico--apiserver--5bcd7b9b6d--g2glf-eth0"
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.472 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 12 00:14:20.477747 containerd[1485]: 2025-07-12 00:14:20.475 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207"
Jul 12 00:14:20.478410 containerd[1485]: time="2025-07-12T00:14:20.477857905Z" level=info msg="TearDown network for sandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" successfully"
Jul 12 00:14:20.484210 containerd[1485]: time="2025-07-12T00:14:20.484152597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:14:20.484338 containerd[1485]: time="2025-07-12T00:14:20.484238317Z" level=info msg="RemovePodSandbox \"b7e48af3a34eee367a406405f1aa7d124c4161901870fcf0f608fc4bcbbca207\" returns successfully"
Jul 12 00:14:44.011817 kubelet[2601]: I0712 00:14:44.009534 2601 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:15:17.986211 systemd[1]: Started sshd@7-91.99.220.16:22-139.178.68.195:38748.service - OpenSSH per-connection server daemon (139.178.68.195:38748).
Jul 12 00:15:18.988906 sshd[5744]: Accepted publickey for core from 139.178.68.195 port 38748 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:18.993025 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:19.003780 systemd-logind[1460]: New session 8 of user core.
Jul 12 00:15:19.011234 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 00:15:19.769499 sshd[5744]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:19.773035 systemd[1]: sshd@7-91.99.220.16:22-139.178.68.195:38748.service: Deactivated successfully.
Jul 12 00:15:19.775599 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:15:19.778233 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:15:19.779526 systemd-logind[1460]: Removed session 8.
Jul 12 00:15:24.940764 systemd[1]: Started sshd@8-91.99.220.16:22-139.178.68.195:37784.service - OpenSSH per-connection server daemon (139.178.68.195:37784).
Jul 12 00:15:25.924821 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 37784 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:25.927785 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:25.935539 systemd-logind[1460]: New session 9 of user core.
Jul 12 00:15:25.939046 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:15:26.704363 sshd[5815]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:26.710429 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:15:26.710679 systemd[1]: sshd@8-91.99.220.16:22-139.178.68.195:37784.service: Deactivated successfully.
Jul 12 00:15:26.717175 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:15:26.718955 systemd-logind[1460]: Removed session 9.
Jul 12 00:15:31.881302 systemd[1]: Started sshd@9-91.99.220.16:22-139.178.68.195:38456.service - OpenSSH per-connection server daemon (139.178.68.195:38456).
Jul 12 00:15:32.858022 sshd[5861]: Accepted publickey for core from 139.178.68.195 port 38456 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:32.860934 sshd[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:32.871854 systemd-logind[1460]: New session 10 of user core.
Jul 12 00:15:32.879075 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:15:33.622300 sshd[5861]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:33.627871 systemd[1]: sshd@9-91.99.220.16:22-139.178.68.195:38456.service: Deactivated successfully.
Jul 12 00:15:33.631174 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:15:33.632561 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:15:33.634011 systemd-logind[1460]: Removed session 10.
Jul 12 00:15:33.805123 systemd[1]: Started sshd@10-91.99.220.16:22-139.178.68.195:38460.service - OpenSSH per-connection server daemon (139.178.68.195:38460).
Jul 12 00:15:34.813826 sshd[5897]: Accepted publickey for core from 139.178.68.195 port 38460 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:34.815308 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:34.823618 systemd-logind[1460]: New session 11 of user core.
Jul 12 00:15:34.828061 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:15:35.646572 sshd[5897]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:35.650910 systemd[1]: sshd@10-91.99.220.16:22-139.178.68.195:38460.service: Deactivated successfully.
Jul 12 00:15:35.653941 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:15:35.658452 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:15:35.662180 systemd-logind[1460]: Removed session 11.
Jul 12 00:15:35.826384 systemd[1]: Started sshd@11-91.99.220.16:22-139.178.68.195:38470.service - OpenSSH per-connection server daemon (139.178.68.195:38470).
Jul 12 00:15:36.823904 sshd[5926]: Accepted publickey for core from 139.178.68.195 port 38470 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:36.825883 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:36.831967 systemd-logind[1460]: New session 12 of user core.
Jul 12 00:15:36.834508 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:15:37.601007 sshd[5926]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:37.605749 systemd[1]: sshd@11-91.99.220.16:22-139.178.68.195:38470.service: Deactivated successfully.
Jul 12 00:15:37.609530 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:15:37.611482 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:15:37.612913 systemd-logind[1460]: Removed session 12.
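The sshd@N-91.99.220.16:22-139.178.68.195:PORT.service names in the session records above are systemd socket activation at work: systemd owns the listening socket and spawns one templated per-connection service for each accepted TCP connection, which is why every login appears as a fresh sshd@... unit that is deactivated as soon as the session closes. A minimal sketch of the standard pattern (generic unit names; not necessarily the exact units this Flatcar image generates):

    # sshd.socket -- systemd listens on port 22 and accepts connections itself
    [Unit]
    Description=OpenSSH per-connection socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service -- one instance per accepted connection; the connection
    # is handed to sshd on stdin/stdout (sshd -i)
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

With Accept=yes, systemd names each instance after the connection tuple (local address, remote address), matching unit names like sshd@7-91.99.220.16:22-139.178.68.195:38748.service logged here.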
Jul 12 00:15:42.779194 systemd[1]: Started sshd@12-91.99.220.16:22-139.178.68.195:40512.service - OpenSSH per-connection server daemon (139.178.68.195:40512).
Jul 12 00:15:43.779814 sshd[5940]: Accepted publickey for core from 139.178.68.195 port 40512 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:43.782437 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:43.790899 systemd-logind[1460]: New session 13 of user core.
Jul 12 00:15:43.795005 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:15:44.561718 sshd[5940]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:44.567094 systemd[1]: sshd@12-91.99.220.16:22-139.178.68.195:40512.service: Deactivated successfully.
Jul 12 00:15:44.571230 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:15:44.573155 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:15:44.574832 systemd-logind[1460]: Removed session 13.
Jul 12 00:15:44.766112 systemd[1]: Started sshd@13-91.99.220.16:22-139.178.68.195:40524.service - OpenSSH per-connection server daemon (139.178.68.195:40524).
Jul 12 00:15:45.825537 sshd[5953]: Accepted publickey for core from 139.178.68.195 port 40524 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:45.827968 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:45.832813 systemd-logind[1460]: New session 14 of user core.
Jul 12 00:15:45.842537 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:15:46.852022 sshd[5953]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:46.861302 systemd[1]: sshd@13-91.99.220.16:22-139.178.68.195:40524.service: Deactivated successfully.
Jul 12 00:15:46.861501 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:15:46.866254 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:15:46.869653 systemd-logind[1460]: Removed session 14.
Jul 12 00:15:47.021139 systemd[1]: Started sshd@14-91.99.220.16:22-139.178.68.195:40540.service - OpenSSH per-connection server daemon (139.178.68.195:40540).
Jul 12 00:15:48.025883 sshd[5964]: Accepted publickey for core from 139.178.68.195 port 40540 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:48.027756 sshd[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:48.035878 systemd-logind[1460]: New session 15 of user core.
Jul 12 00:15:48.042074 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:15:48.993056 systemd[1]: run-containerd-runc-k8s.io-4cfc9cc73e9bc3c38943810948cd825d2c8dc38f5030aee36df8d554a6a45a9d-runc.Y31Ps7.mount: Deactivated successfully.
Jul 12 00:15:51.214544 sshd[5964]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:51.220140 systemd[1]: sshd@14-91.99.220.16:22-139.178.68.195:40540.service: Deactivated successfully.
Jul 12 00:15:51.224131 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:15:51.225989 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:15:51.227445 systemd-logind[1460]: Removed session 15.
Jul 12 00:15:51.395154 systemd[1]: Started sshd@15-91.99.220.16:22-139.178.68.195:38278.service - OpenSSH per-connection server daemon (139.178.68.195:38278).
Jul 12 00:15:52.406028 sshd[6022]: Accepted publickey for core from 139.178.68.195 port 38278 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:52.409776 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:52.415602 systemd-logind[1460]: New session 16 of user core.
Jul 12 00:15:52.422049 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:15:53.365485 sshd[6022]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:53.369251 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:15:53.369985 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:15:53.370772 systemd[1]: sshd@15-91.99.220.16:22-139.178.68.195:38278.service: Deactivated successfully.
Jul 12 00:15:53.378881 systemd-logind[1460]: Removed session 16.
Jul 12 00:15:53.537159 systemd[1]: Started sshd@16-91.99.220.16:22-139.178.68.195:38294.service - OpenSSH per-connection server daemon (139.178.68.195:38294).
Jul 12 00:15:54.515467 sshd[6033]: Accepted publickey for core from 139.178.68.195 port 38294 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:54.519236 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:54.527979 systemd-logind[1460]: New session 17 of user core.
Jul 12 00:15:54.534547 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:15:55.289014 sshd[6033]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:55.295610 systemd[1]: sshd@16-91.99.220.16:22-139.178.68.195:38294.service: Deactivated successfully.
Jul 12 00:15:55.299587 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:15:55.302626 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:15:55.304693 systemd-logind[1460]: Removed session 17.
Jul 12 00:16:00.468158 systemd[1]: Started sshd@17-91.99.220.16:22-139.178.68.195:60738.service - OpenSSH per-connection server daemon (139.178.68.195:60738).
Jul 12 00:16:01.474939 sshd[6050]: Accepted publickey for core from 139.178.68.195 port 60738 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:16:01.478761 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:01.486511 systemd-logind[1460]: New session 18 of user core.
Jul 12 00:16:01.493143 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:16:02.248613 sshd[6050]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:02.252991 systemd[1]: sshd@17-91.99.220.16:22-139.178.68.195:60738.service: Deactivated successfully.
Jul 12 00:16:02.255076 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:16:02.257640 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:16:02.259334 systemd-logind[1460]: Removed session 18.
Jul 12 00:16:07.423819 systemd[1]: Started sshd@18-91.99.220.16:22-139.178.68.195:60750.service - OpenSSH per-connection server daemon (139.178.68.195:60750).
Jul 12 00:16:08.398215 sshd[6085]: Accepted publickey for core from 139.178.68.195 port 60750 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:16:08.400222 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:16:08.405546 systemd-logind[1460]: New session 19 of user core.
Jul 12 00:16:08.410057 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:16:09.143262 sshd[6085]: pam_unix(sshd:session): session closed for user core
Jul 12 00:16:09.148815 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:16:09.150018 systemd[1]: sshd@18-91.99.220.16:22-139.178.68.195:60750.service: Deactivated successfully.
Jul 12 00:16:09.152048 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:16:09.152969 systemd-logind[1460]: Removed session 19.
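Each forced RemovePodSandbox in the containerd section above follows the same idempotent teardown sequence: StopPodSandbox; a "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP" warning (the endpoint now records the replacement pod's live container, e.g. e40ab5be... for coredns-7c65d6cfc9-rnjg4, so the live endpoint must be left alone); IPAM release under the host-wide lock; an "Asked to release address but it doesn't exist" warning because the address was already freed; and finally "Teardown processing complete". A small, hypothetical Python sketch (regex and function names are illustrative, not part of any Calico or containerd tooling) that extracts that sequence from a journal dump formatted like this one:

    import re
    import sys

    # Matches containerd-wrapped Calico CNI lines seen above, e.g.
    # "Jul 12 00:14:20.274415 containerd[1485]: 2025-07-12 00:14:20.267 [WARNING][5545] ipam/ipam_plugin.go 429: ..."
    CNI_LINE = re.compile(
        r"containerd\[\d+\]: "
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>\w+)\]\[(?P<call>\d+)\] "
        r"(?P<src>[\w./-]+ \d+): "
        r"(?P<msg>.+)"
    )

    TEARDOWN_STEPS = (
        "Cleaning up netns",
        "Releasing IP address(es)",
        "Asked to release address but it doesn't exist",
        "Released host-wide IPAM lock",
        "Teardown processing complete",
    )

    def teardown_events(lines):
        """Yield (timestamp, level, message) for recognised teardown steps."""
        for line in lines:
            m = CNI_LINE.search(line)
            if m and any(step in m.group("msg") for step in TEARDOWN_STEPS):
                # Trim the trailing ContainerID=... attributes for readability.
                yield m.group("ts"), m.group("level"), m.group("msg").split(" ContainerID=")[0]

    if __name__ == "__main__":
        for ts, level, msg in teardown_events(sys.stdin):
            print(f"{ts} {level:7} {msg}")

Fed this log on stdin, the sketch prints one line per teardown step in order, which makes the repeated StopPodSandbox/RemovePodSandbox cycles easy to compare across sandboxes.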