Sep 13 00:01:52.882416 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:01:52.882438 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:01:52.882449 kernel: KASLR enabled
Sep 13 00:01:52.882455 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Sep 13 00:01:52.882461 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Sep 13 00:01:52.882466 kernel: random: crng init done
Sep 13 00:01:52.882473 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:01:52.882480 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Sep 13 00:01:52.882486 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:01:52.882494 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882500 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882506 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882512 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882518 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882526 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882534 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882540 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882547 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:01:52.882553 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:01:52.882559 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Sep 13 00:01:52.882566 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:01:52.882572 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:01:52.882578 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Sep 13 00:01:52.882584 kernel: Zone ranges:
Sep 13 00:01:52.882591 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 13 00:01:52.882598 kernel: DMA32 empty
Sep 13 00:01:52.882605 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Sep 13 00:01:52.882611 kernel: Movable zone start for each node
Sep 13 00:01:52.882617 kernel: Early memory node ranges
Sep 13 00:01:52.882623 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Sep 13 00:01:52.882630 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Sep 13 00:01:52.882636 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Sep 13 00:01:52.882643 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Sep 13 00:01:52.882649 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Sep 13 00:01:52.882655 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Sep 13 00:01:52.882661 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Sep 13 00:01:52.882668 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Sep 13 00:01:52.882675 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Sep 13 00:01:52.882682 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:01:52.882688 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:01:52.882697 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:01:52.882704 kernel: psci: Trusted OS migration not required
Sep 13 00:01:52.882711 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:01:52.882719 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:01:52.882726 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:01:52.882733 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:01:52.882740 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 13 00:01:52.882746 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:01:52.882753 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:01:52.882760 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:01:52.882767 kernel: CPU features: detected: Spectre-v4
Sep 13 00:01:52.882773 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:01:52.882780 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:01:52.882821 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:01:52.882828 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:01:52.882835 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:01:52.882842 kernel: alternatives: applying boot alternatives
Sep 13 00:01:52.882850 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:01:52.882857 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:01:52.882864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:01:52.882870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:01:52.882877 kernel: Fallback order for Node 0: 0
Sep 13 00:01:52.882884 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Sep 13 00:01:52.882890 kernel: Policy zone: Normal
Sep 13 00:01:52.882910 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:01:52.882917 kernel: software IO TLB: area num 2.
Sep 13 00:01:52.882924 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Sep 13 00:01:52.882931 kernel: Memory: 3882740K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213260K reserved, 0K cma-reserved)
Sep 13 00:01:52.882938 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:01:52.882945 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:01:52.882952 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:01:52.882959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:01:52.882965 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:01:52.882972 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:01:52.882979 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:01:52.882987 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:01:52.882994 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:01:52.883000 kernel: GICv3: 256 SPIs implemented
Sep 13 00:01:52.883007 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:01:52.883014 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:01:52.883020 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:01:52.883027 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:01:52.883034 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:01:52.883041 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:01:52.883048 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:01:52.883054 kernel: GICv3: using LPI property table @0x00000001000e0000
Sep 13 00:01:52.883061 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Sep 13 00:01:52.883069 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:01:52.883076 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:01:52.883083 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:01:52.883090 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:01:52.883097 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:01:52.883103 kernel: Console: colour dummy device 80x25
Sep 13 00:01:52.883110 kernel: ACPI: Core revision 20230628
Sep 13 00:01:52.883118 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:01:52.883125 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:01:52.883132 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:01:52.883140 kernel: landlock: Up and running.
Sep 13 00:01:52.883146 kernel: SELinux: Initializing.
Sep 13 00:01:52.883153 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:01:52.883160 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:01:52.883168 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:01:52.883175 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:01:52.883182 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:01:52.883189 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:01:52.883195 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:01:52.883204 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:01:52.883211 kernel: Remapping and enabling EFI services.
Sep 13 00:01:52.883218 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:01:52.883225 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:01:52.883232 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:01:52.883239 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Sep 13 00:01:52.883246 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:01:52.883252 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:01:52.883259 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:01:52.883266 kernel: SMP: Total of 2 processors activated.
Sep 13 00:01:52.883275 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:01:52.883282 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:01:52.883294 kernel: CPU features: detected: Common not Private translations
Sep 13 00:01:52.883303 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:01:52.883310 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 00:01:52.883318 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:01:52.883325 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:01:52.883332 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:01:52.883340 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:01:52.883348 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:01:52.883356 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:01:52.883363 kernel: alternatives: applying system-wide alternatives
Sep 13 00:01:52.883370 kernel: devtmpfs: initialized
Sep 13 00:01:52.883378 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:01:52.883385 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:01:52.883392 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:01:52.883401 kernel: SMBIOS 3.0.0 present.
Sep 13 00:01:52.883408 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Sep 13 00:01:52.883415 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:01:52.883423 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:01:52.883430 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:01:52.883438 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:01:52.883445 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:01:52.883452 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Sep 13 00:01:52.883459 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:01:52.883468 kernel: cpuidle: using governor menu
Sep 13 00:01:52.883476 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:01:52.883483 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:01:52.883490 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:01:52.883498 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:01:52.883505 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:01:52.883512 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:01:52.883519 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:01:52.883527 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:01:52.883536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:01:52.883544 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:01:52.883551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:01:52.883559 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:01:52.883566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:01:52.883573 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:01:52.883580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:01:52.883588 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:01:52.883595 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:01:52.883603 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:01:52.883611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:01:52.883619 kernel: ACPI: Interpreter enabled
Sep 13 00:01:52.883626 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:01:52.883633 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:01:52.883641 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:01:52.883648 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:01:52.883655 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:01:52.883802 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:01:52.883887 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:01:52.885424 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:01:52.885495 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:01:52.885559 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:01:52.885568 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:01:52.885576 kernel: PCI host bridge to bus 0000:00
Sep 13 00:01:52.885649 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:01:52.885714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:01:52.885771 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:01:52.885852 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:01:52.888035 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:01:52.888123 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Sep 13 00:01:52.888189 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Sep 13 00:01:52.888261 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:01:52.888334 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.888401 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Sep 13 00:01:52.888474 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.888540 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Sep 13 00:01:52.888616 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.888684 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Sep 13 00:01:52.888754 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.888839 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Sep 13 00:01:52.888927 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.888997 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Sep 13 00:01:52.889068 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.890019 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Sep 13 00:01:52.890107 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.890172 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Sep 13 00:01:52.890245 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.890321 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Sep 13 00:01:52.890393 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Sep 13 00:01:52.890458 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Sep 13 00:01:52.890537 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Sep 13 00:01:52.890602 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Sep 13 00:01:52.890681 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:01:52.890751 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Sep 13 00:01:52.890836 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:01:52.890918 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:01:52.891002 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 00:01:52.891070 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Sep 13 00:01:52.891147 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Sep 13 00:01:52.891222 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Sep 13 00:01:52.891289 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Sep 13 00:01:52.891364 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Sep 13 00:01:52.891433 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Sep 13 00:01:52.891510 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 00:01:52.891578 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Sep 13 00:01:52.891660 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Sep 13 00:01:52.891728 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Sep 13 00:01:52.891810 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:01:52.891890 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Sep 13 00:01:52.891973 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Sep 13 00:01:52.892041 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Sep 13 00:01:52.892107 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Sep 13 00:01:52.892174 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Sep 13 00:01:52.892239 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:01:52.892303 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Sep 13 00:01:52.892374 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Sep 13 00:01:52.892438 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Sep 13 00:01:52.892502 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Sep 13 00:01:52.892569 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 13 00:01:52.892633 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:01:52.892698 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Sep 13 00:01:52.892764 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 13 00:01:52.892865 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Sep 13 00:01:52.895263 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Sep 13 00:01:52.895346 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 13 00:01:52.895412 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Sep 13 00:01:52.895488 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Sep 13 00:01:52.895554 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 13 00:01:52.895621 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:01:52.895690 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Sep 13 00:01:52.895772 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 00:01:52.897436 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:01:52.897533 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Sep 13 00:01:52.897602 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 00:01:52.897666 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:01:52.897730 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Sep 13 00:01:52.897836 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 00:01:52.898032 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:01:52.898114 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Sep 13 00:01:52.898179 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Sep 13 00:01:52.898242 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:52.898308 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Sep 13 00:01:52.898371 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:52.898435 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Sep 13 00:01:52.898502 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:52.898567 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Sep 13 00:01:52.898630 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:52.898698 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Sep 13 00:01:52.898806 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:52.898900 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:52.898975 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:52.899046 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:52.899110 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:52.899173 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:52.899237 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:52.899302 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Sep 13 00:01:52.899366 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:52.899436 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Sep 13 00:01:52.899504 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Sep 13 00:01:52.899570 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Sep 13 00:01:52.899634 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 00:01:52.899700 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Sep 13 00:01:52.899765 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 00:01:52.899848 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Sep 13 00:01:52.901277 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 00:01:52.901359 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Sep 13 00:01:52.901429 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 13 00:01:52.901496 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Sep 13 00:01:52.901560 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 13 00:01:52.901629 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Sep 13 00:01:52.901693 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 13 00:01:52.901759 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Sep 13 00:01:52.901848 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 13 00:01:52.901941 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Sep 13 00:01:52.902012 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 13 00:01:52.902079 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Sep 13 00:01:52.902143 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Sep 13 00:01:52.902213 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Sep 13 00:01:52.902286 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Sep 13 00:01:52.902354 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:01:52.902422 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Sep 13 00:01:52.902487 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Sep 13 00:01:52.902554 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 13 00:01:52.902619 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Sep 13 00:01:52.902683 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:52.902754 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Sep 13 00:01:52.902860 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Sep 13 00:01:52.903638 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 13 00:01:52.903717 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Sep 13 00:01:52.903797 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:52.903880 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Sep 13 00:01:52.904049 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Sep 13 00:01:52.904120 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Sep 13 00:01:52.904182 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 13 00:01:52.904251 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Sep 13 00:01:52.904313 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:52.904383 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Sep 13 00:01:52.904448 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Sep 13 00:01:52.904511 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 13 00:01:52.904573 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Sep 13 00:01:52.904635 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:52.904706 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Sep 13 00:01:52.904773 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Sep 13 00:01:52.904852 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 13 00:01:52.904978 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Sep 13 00:01:52.905045 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:52.905117 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Sep 13 00:01:52.905184 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Sep 13 00:01:52.905251 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Sep 13 00:01:52.905316 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 13 00:01:52.905386 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:52.905451 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:52.905523 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Sep 13 00:01:52.905589 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Sep 13 00:01:52.905655 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Sep 13 00:01:52.905719 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Sep 13 00:01:52.905818 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 13 00:01:52.905971 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:52.906053 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:52.906119 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Sep 13 00:01:52.906182 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 13 00:01:52.906245 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:52.906307 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:52.906373 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Sep 13 00:01:52.906437 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Sep 13 00:01:52.906501 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Sep 13 00:01:52.906568 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:52.906632 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:01:52.906689 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:01:52.906745 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:01:52.906829 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 13 00:01:52.906891 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Sep 13 00:01:52.907014 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Sep 13 00:01:52.907092 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Sep 13 00:01:52.907151 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Sep 13 00:01:52.907208 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Sep 13 00:01:52.907274 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Sep 13 00:01:52.907333 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Sep 13 00:01:52.907391 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Sep 13 00:01:52.907459 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Sep 13 00:01:52.907517 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Sep 13 00:01:52.907577 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Sep 13 00:01:52.907652 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Sep 13 00:01:52.907712 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Sep 13 00:01:52.907770 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Sep 13 00:01:52.907868 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Sep 13 00:01:52.909042 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Sep 13 00:01:52.909118 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Sep 13 00:01:52.909193 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Sep 13 00:01:52.909253 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Sep 13 00:01:52.909317 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Sep 13 00:01:52.909384 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Sep 13 00:01:52.909445 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Sep 13 00:01:52.909504 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Sep 13 00:01:52.909570 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Sep 13 00:01:52.909634 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Sep 13 00:01:52.909695 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Sep 13 00:01:52.909708 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:01:52.909716 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:01:52.909724 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:01:52.909732 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:01:52.909740 kernel: iommu: Default domain type: Translated
Sep 13 00:01:52.909748 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:01:52.909755 kernel: efivars: Registered efivars operations
Sep 13 00:01:52.909763 kernel: vgaarb: loaded
Sep 13 00:01:52.909771 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:01:52.909780 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:01:52.909821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:01:52.909829 kernel: pnp: PnP ACPI init
Sep 13 00:01:52.909929 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:01:52.909942 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:01:52.909950 kernel: NET: Registered PF_INET protocol family
Sep 13 00:01:52.909958 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:01:52.909966 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:01:52.909978 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:01:52.909986 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:01:52.909994 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:01:52.910002 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:01:52.910010 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:01:52.910020 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:01:52.910029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:01:52.910109 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Sep 13 00:01:52.910121 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:01:52.910131 kernel: kvm [1]: HYP mode not available
Sep 13 00:01:52.910139 kernel: Initialise system trusted keyrings
Sep 13 00:01:52.910146 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:01:52.910154 kernel: Key type asymmetric registered
Sep 13 00:01:52.910162 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:01:52.910169 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:01:52.910177 kernel: io scheduler mq-deadline registered
Sep 13 00:01:52.910185 kernel: io scheduler kyber registered
Sep 13 00:01:52.910193 kernel: io scheduler bfq registered
Sep 13 00:01:52.910203 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 13 00:01:52.910270 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Sep 13 00:01:52.910337 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Sep 13 00:01:52.910403 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.910470 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Sep 13 00:01:52.910536 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Sep 13 00:01:52.910604 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.910672 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Sep 13 00:01:52.910739 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Sep 13 00:01:52.910822 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.910891 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Sep 13 00:01:52.914065 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Sep 13 00:01:52.914143 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.914262 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Sep 13 00:01:52.914471 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Sep 13 00:01:52.914540 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.914608 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Sep 13 00:01:52.914673 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Sep 13 00:01:52.914743 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.914839 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Sep 13 00:01:52.914922 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Sep 13 00:01:52.914994 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.915063 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Sep 13 00:01:52.915133 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Sep 13 00:01:52.915200 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.915210 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Sep 13 00:01:52.915275 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Sep 13 00:01:52.915341 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Sep 13 00:01:52.915406 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Sep 13 00:01:52.915416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:01:52.915424 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:01:52.915434 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:01:52.915504 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Sep 13 00:01:52.915577 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Sep 13 00:01:52.915588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:01:52.915596 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 13 00:01:52.915662 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Sep 13 00:01:52.915673 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Sep 13 00:01:52.915681 kernel: thunder_xcv, ver 1.0
Sep 13 00:01:52.915689 kernel: thunder_bgx, ver 1.0
Sep 13 00:01:52.915699 kernel: nicpf, ver 1.0
Sep 13 00:01:52.915707 kernel: nicvf, ver 1.0
Sep 13 00:01:52.915792 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:01:52.915863 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:01:52 UTC (1757721712)
Sep 13 00:01:52.915873 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:01:52.915881 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:01:52.915889 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 13 00:01:52.915918 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 00:01:52.915930 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:01:52.915938 kernel: Segment Routing with IPv6
Sep 13 00:01:52.915946 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:01:52.915955 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:01:52.915962 kernel: Key type dns_resolver registered
Sep 13 00:01:52.915970 kernel: registered taskstats version 1
Sep 13 00:01:52.915978 kernel: Loading compiled-in X.509 certificates
Sep 13 00:01:52.915986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 13 00:01:52.915994 kernel: Key type .fscrypt registered
Sep 13 00:01:52.916003 kernel: Key type fscrypt-provisioning registered
Sep 13 00:01:52.916011 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:01:52.916019 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:01:52.916027 kernel: ima: No architecture policies found
Sep 13 00:01:52.916034 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:01:52.916042 kernel: clk: Disabling unused clocks
Sep 13 00:01:52.916050 kernel: Freeing unused kernel memory: 39488K
Sep 13 00:01:52.916058 kernel: Run /init as init process
Sep 13 00:01:52.916065 kernel: with arguments:
Sep 13 00:01:52.916075 kernel: /init
Sep 13 00:01:52.916083 kernel: with environment:
Sep 13 00:01:52.916090 kernel: HOME=/
Sep 13 00:01:52.916098 kernel: TERM=linux
Sep 13 00:01:52.916105 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:01:52.916115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:01:52.916125 systemd[1]: Detected virtualization kvm.
Sep 13 00:01:52.916133 systemd[1]: Detected architecture arm64.
Sep 13 00:01:52.916143 systemd[1]: Running in initrd.
Sep 13 00:01:52.916151 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:01:52.916159 systemd[1]: Hostname set to <localhost>.
Sep 13 00:01:52.916168 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:01:52.916176 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:01:52.916185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:01:52.916193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:01:52.916202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:01:52.916213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:01:52.916222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:01:52.916230 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:01:52.916240 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:01:52.916249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:01:52.916257 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:01:52.916266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:01:52.916276 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:01:52.916284 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:01:52.916293 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:01:52.916301 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:01:52.916309 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:01:52.916318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:01:52.916326 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:01:52.916334 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:01:52.916345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:01:52.916353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:01:52.916361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:01:52.916370 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:01:52.916378 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:01:52.916387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:01:52.916395 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:01:52.916403 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:01:52.916412 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:01:52.916422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:01:52.916431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:01:52.916439 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:01:52.916467 systemd-journald[235]: Collecting audit messages is disabled.
Sep 13 00:01:52.916491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:01:52.916499 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:01:52.916509 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:01:52.916517 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:52.916527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:01:52.916535 kernel: Bridge firewalling registered
Sep 13 00:01:52.916544 systemd-journald[235]: Journal started
Sep 13 00:01:52.916563 systemd-journald[235]: Runtime Journal (/run/log/journal/97c96ec701f940e39b797134b5d5f1a4) is 8.0M, max 76.6M, 68.6M free.
Sep 13 00:01:52.918242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:01:52.894494 systemd-modules-load[236]: Inserted module 'overlay'
Sep 13 00:01:52.914732 systemd-modules-load[236]: Inserted module 'br_netfilter'
Sep 13 00:01:52.921943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:01:52.921982 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:01:52.922263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:01:52.931187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:01:52.933173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:01:52.937060 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:01:52.937909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:01:52.942969 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:01:52.945367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:01:52.955270 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:01:52.958924 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:01:52.965516 dracut-cmdline[265]: dracut-dracut-053
Sep 13 00:01:52.966089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:01:52.970208 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:01:52.999752 systemd-resolved[275]: Positive Trust Anchors:
Sep 13 00:01:52.999771 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:01:52.999838 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:01:53.005756 systemd-resolved[275]: Defaulting to hostname 'linux'.
Sep 13 00:01:53.007440 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:01:53.008672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:01:53.067991 kernel: SCSI subsystem initialized
Sep 13 00:01:53.072938 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:01:53.079958 kernel: iscsi: registered transport (tcp)
Sep 13 00:01:53.093947 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:01:53.094019 kernel: QLogic iSCSI HBA Driver
Sep 13 00:01:53.154842 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:01:53.161077 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:01:53.177166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:01:53.177235 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:01:53.178063 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:01:53.225964 kernel: raid6: neonx8 gen() 15670 MB/s
Sep 13 00:01:53.242942 kernel: raid6: neonx4 gen() 15585 MB/s
Sep 13 00:01:53.259965 kernel: raid6: neonx2 gen() 13151 MB/s
Sep 13 00:01:53.276958 kernel: raid6: neonx1 gen() 10511 MB/s
Sep 13 00:01:53.293959 kernel: raid6: int64x8 gen() 6922 MB/s
Sep 13 00:01:53.310953 kernel: raid6: int64x4 gen() 7309 MB/s
Sep 13 00:01:53.327951 kernel: raid6: int64x2 gen() 6099 MB/s
Sep 13 00:01:53.345043 kernel: raid6: int64x1 gen() 5024 MB/s
Sep 13 00:01:53.345136 kernel: raid6: using algorithm neonx8 gen() 15670 MB/s
Sep 13 00:01:53.361957 kernel: raid6: .... xor() 11848 MB/s, rmw enabled
Sep 13 00:01:53.362026 kernel: raid6: using neon recovery algorithm
Sep 13 00:01:53.366959 kernel: xor: measuring software checksum speed
Sep 13 00:01:53.367022 kernel: 8regs : 19750 MB/sec
Sep 13 00:01:53.367039 kernel: 32regs : 19674 MB/sec
Sep 13 00:01:53.367969 kernel: arm64_neon : 27007 MB/sec
Sep 13 00:01:53.368024 kernel: xor: using function: arm64_neon (27007 MB/sec)
Sep 13 00:01:53.418301 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:01:53.431108 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:01:53.438101 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:01:53.453428 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Sep 13 00:01:53.456916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:01:53.466138 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:01:53.480325 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Sep 13 00:01:53.514595 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:01:53.520202 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:01:53.569962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:01:53.578162 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:01:53.608261 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:01:53.609344 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:01:53.611515 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:01:53.613106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:01:53.620715 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:01:53.636579 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:01:53.679493 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:01:53.685958 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 00:01:53.688915 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Sep 13 00:01:53.690910 kernel: ACPI: bus type USB registered
Sep 13 00:01:53.692188 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:01:53.692217 kernel: usbcore: registered new interface driver hub
Sep 13 00:01:53.692245 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:01:53.692363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:01:53.695095 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:01:53.695653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:01:53.703013 kernel: usbcore: registered new device driver usb
Sep 13 00:01:53.695810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:53.700990 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:01:53.708165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:01:53.722993 kernel: sr 0:0:0:0: Power-on or device reset occurred
Sep 13 00:01:53.724955 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Sep 13 00:01:53.725156 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 00:01:53.726924 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Sep 13 00:01:53.742306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:01:53.749199 kernel: sd 0:0:0:1: Power-on or device reset occurred
Sep 13 00:01:53.750385 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Sep 13 00:01:53.750487 kernel: sd 0:0:0:1: [sda] Write Protect is off
Sep 13 00:01:53.750570 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Sep 13 00:01:53.750649 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:01:53.751351 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:01:53.756605 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:01:53.756652 kernel: GPT:17805311 != 80003071
Sep 13 00:01:53.756663 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:01:53.756673 kernel: GPT:17805311 != 80003071
Sep 13 00:01:53.757127 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:01:53.758051 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:01:53.759477 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Sep 13 00:01:53.766429 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:01:53.766631 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Sep 13 00:01:53.766723 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 13 00:01:53.769656 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Sep 13 00:01:53.769867 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Sep 13 00:01:53.770917 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Sep 13 00:01:53.773172 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:01:53.773350 kernel: hub 1-0:1.0: 4 ports detected
Sep 13 00:01:53.774477 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 13 00:01:53.775339 kernel: hub 2-0:1.0: USB hub found
Sep 13 00:01:53.775491 kernel: hub 2-0:1.0: 4 ports detected
Sep 13 00:01:53.781412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:01:53.811924 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (508)
Sep 13 00:01:53.811977 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (501)
Sep 13 00:01:53.823617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Sep 13 00:01:53.830788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Sep 13 00:01:53.837080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Sep 13 00:01:53.837808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Sep 13 00:01:53.846952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:01:53.858276 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:01:53.864591 disk-uuid[571]: Primary Header is updated. Sep 13 00:01:53.864591 disk-uuid[571]: Secondary Entries is updated. Sep 13 00:01:53.864591 disk-uuid[571]: Secondary Header is updated. Sep 13 00:01:53.868565 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:01:54.012327 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 00:01:54.147931 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Sep 13 00:01:54.148002 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Sep 13 00:01:54.148935 kernel: usbcore: registered new interface driver usbhid Sep 13 00:01:54.148960 kernel: usbhid: USB HID core driver Sep 13 00:01:54.255052 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Sep 13 00:01:54.386939 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Sep 13 00:01:54.440985 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Sep 13 00:01:54.882036 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:01:54.882301 disk-uuid[572]: The operation has completed successfully. Sep 13 00:01:54.931912 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:01:54.932951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:01:54.952182 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:01:54.956503 sh[590]: Success Sep 13 00:01:54.972990 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 13 00:01:55.025527 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:01:55.035075 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:01:55.036546 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:01:55.060453 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77 Sep 13 00:01:55.060524 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:55.060544 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:01:55.060562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:01:55.061228 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:01:55.067937 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:01:55.070098 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:01:55.072699 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
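
Note: disk-uuid.service above regenerates GPT identifiers so cloned images do not share UUIDs; the disk-uuid[571] messages ("Primary Header is updated... Secondary Header is updated.") confirm that both GPT copies were rewritten. A rough manual equivalent, assuming sgdisk, whose -G flag randomizes the disk GUID and every partition's unique GUID:

    # Give a cloned disk fresh GPT identifiers
    sgdisk -G /dev/sda
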
Sep 13 00:01:55.090333 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:01:55.094587 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:01:55.106970 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:55.107035 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:55.107047 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:01:55.113916 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:01:55.113976 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:01:55.125944 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:55.125873 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:01:55.133451 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:01:55.140070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:01:55.222493 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:01:55.232104 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:01:55.240280 ignition[672]: Ignition 2.19.0 Sep 13 00:01:55.240464 ignition[672]: Stage: fetch-offline Sep 13 00:01:55.243175 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:01:55.240512 ignition[672]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:55.240521 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:55.240682 ignition[672]: parsed url from cmdline: "" Sep 13 00:01:55.240685 ignition[672]: no config URL provided Sep 13 00:01:55.240690 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:01:55.240698 ignition[672]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:01:55.240703 ignition[672]: failed to fetch config: resource requires networking Sep 13 00:01:55.240918 ignition[672]: Ignition finished successfully Sep 13 00:01:55.266624 systemd-networkd[777]: lo: Link UP Sep 13 00:01:55.266639 systemd-networkd[777]: lo: Gained carrier Sep 13 00:01:55.268344 systemd-networkd[777]: Enumeration completed Sep 13 00:01:55.268764 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:01:55.268806 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:55.268810 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:01:55.269869 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:55.269872 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:01:55.270382 systemd-networkd[777]: eth0: Link UP Sep 13 00:01:55.270385 systemd-networkd[777]: eth0: Gained carrier Sep 13 00:01:55.270392 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:55.273085 systemd[1]: Reached target network.target - Network. 
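
Note: the fetch-offline stage above finds no config at /usr/lib/ignition/user.ign and no URL on the kernel command line, so it reports "resource requires networking" and defers to the networked fetch stage. For orientation, a minimal Ignition config of the sort user.ign would contain is sketched below; the spec version and the SSH key are illustrative placeholders, not values from this boot:

    # Hypothetical minimal user.ign; it would drive the "adding ssh keys to
    # user core" step seen later in the files stage
    cat > user.ign <<'EOF'
    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...example"] }
        ]
      }
    }
    EOF
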
Sep 13 00:01:55.275237 systemd-networkd[777]: eth1: Link UP Sep 13 00:01:55.275241 systemd-networkd[777]: eth1: Gained carrier Sep 13 00:01:55.275251 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:55.282087 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 00:01:55.295260 ignition[780]: Ignition 2.19.0 Sep 13 00:01:55.295280 ignition[780]: Stage: fetch Sep 13 00:01:55.295524 ignition[780]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:55.295539 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:55.295651 ignition[780]: parsed url from cmdline: "" Sep 13 00:01:55.295654 ignition[780]: no config URL provided Sep 13 00:01:55.295659 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:01:55.295666 ignition[780]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:01:55.295730 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Sep 13 00:01:55.297334 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:01:55.323000 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:01:55.340032 systemd-networkd[777]: eth0: DHCPv4 address 188.245.230.74/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:01:55.497523 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Sep 13 00:01:55.503274 ignition[780]: GET result: OK Sep 13 00:01:55.503431 ignition[780]: parsing config with SHA512: 33f555e19940f28d2fddc0deb31e2b9e18e00eb0a5d3bc54a028780f84fc05ad8b0604c28561824efcf82866751298c79ae6db0cca7ed4d40dab75babafb664f Sep 13 00:01:55.511873 unknown[780]: fetched base config from "system" Sep 13 00:01:55.512489 unknown[780]: fetched base config from "system" Sep 13 00:01:55.512915 ignition[780]: fetch: fetch complete Sep 13 00:01:55.512498 unknown[780]: fetched user config from "hetzner" Sep 13 00:01:55.512920 ignition[780]: fetch: fetch passed Sep 13 00:01:55.516485 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 00:01:55.512971 ignition[780]: Ignition finished successfully Sep 13 00:01:55.527201 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:01:55.539326 ignition[787]: Ignition 2.19.0 Sep 13 00:01:55.539347 ignition[787]: Stage: kargs Sep 13 00:01:55.539571 ignition[787]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:55.539582 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:55.540799 ignition[787]: kargs: kargs passed Sep 13 00:01:55.541940 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:01:55.540858 ignition[787]: Ignition finished successfully Sep 13 00:01:55.549092 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 13 00:01:55.562819 ignition[793]: Ignition 2.19.0 Sep 13 00:01:55.562829 ignition[793]: Stage: disks Sep 13 00:01:55.563055 ignition[793]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:55.566294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:01:55.563065 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:55.568177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
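
Note: the fetch stage's first GET fails with "network is unreachable" because it races DHCP; once eth0 and eth1 acquire addresses, attempt #2 succeeds. The endpoints involved appear verbatim in the log and can be queried from inside a Hetzner instance, e.g.:

    # Userdata (the Ignition config) and the instance hostname, as fetched at boot
    curl -s http://169.254.169.254/hetzner/v1/userdata
    curl -s http://169.254.169.254/hetzner/v1/metadata/hostname
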
Sep 13 00:01:55.564001 ignition[793]: disks: disks passed Sep 13 00:01:55.569858 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:01:55.564057 ignition[793]: Ignition finished successfully Sep 13 00:01:55.571566 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:01:55.572503 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:01:55.573313 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:01:55.581130 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:01:55.596081 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 00:01:55.602996 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:01:55.610100 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:01:55.665988 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none. Sep 13 00:01:55.666603 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:01:55.668476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:01:55.675083 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:01:55.678262 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:01:55.681094 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 13 00:01:55.686988 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810) Sep 13 00:01:55.687033 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:55.687192 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:01:55.691664 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:55.691704 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:01:55.690286 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:01:55.694525 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:01:55.697915 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:01:55.697973 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:01:55.702189 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:01:55.708288 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:01:55.757276 coreos-metadata[812]: Sep 13 00:01:55.756 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Sep 13 00:01:55.758957 coreos-metadata[812]: Sep 13 00:01:55.758 INFO Fetch successful Sep 13 00:01:55.761989 coreos-metadata[812]: Sep 13 00:01:55.760 INFO wrote hostname ci-4081-3-5-n-d78c7abf5e to /sysroot/etc/hostname Sep 13 00:01:55.763353 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:01:55.766104 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
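
Note: systemd-fsck-root above finds the ROOT filesystem clean ("14/1628000 files, 120691/1617920 blocks") before /sysroot is mounted. The same check can be repeated by hand with e2fsprogs, assuming the volume is unmounted (for example from a rescue environment), since results on a live read-write mount are unreliable:

    # -f: force a full check even if marked clean; -n: read-only, answer "no" to all prompts
    e2fsck -fn /dev/disk/by-label/ROOT
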
Sep 13 00:01:55.772135 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:01:55.777078 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:01:55.781267 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:01:55.879757 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:01:55.890080 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:01:55.895115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:01:55.898934 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:55.927489 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:01:55.928558 ignition[927]: INFO : Ignition 2.19.0 Sep 13 00:01:55.928558 ignition[927]: INFO : Stage: mount Sep 13 00:01:55.928558 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:55.928558 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:55.930817 ignition[927]: INFO : mount: mount passed Sep 13 00:01:55.932589 ignition[927]: INFO : Ignition finished successfully Sep 13 00:01:55.933104 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:01:55.939040 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:01:56.061038 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 00:01:56.067162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:01:56.077946 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (938) Sep 13 00:01:56.079918 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368 Sep 13 00:01:56.079970 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 13 00:01:56.079988 kernel: BTRFS info (device sda6): using free space tree Sep 13 00:01:56.083023 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 13 00:01:56.083061 kernel: BTRFS info (device sda6): auto enabling async discard Sep 13 00:01:56.087306 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:01:56.113947 ignition[956]: INFO : Ignition 2.19.0 Sep 13 00:01:56.113947 ignition[956]: INFO : Stage: files Sep 13 00:01:56.113947 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:56.113947 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:56.116706 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:01:56.118535 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:01:56.118535 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:01:56.121484 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:01:56.122416 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:01:56.123661 unknown[956]: wrote ssh authorized keys file for user: core Sep 13 00:01:56.125111 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:01:56.126506 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:01:56.127977 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 13 00:01:56.218919 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:01:56.413802 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 13 00:01:56.413802 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:01:56.419198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 13 00:01:56.527271 systemd-networkd[777]: eth1: Gained IPv6LL Sep 13 00:01:56.591462 systemd-networkd[777]: eth0: Gained IPv6LL Sep 13 00:01:56.676428 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:01:56.888405 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 13 00:01:56.888405 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:01:56.891117 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:01:56.891117 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:01:56.891117 ignition[956]: INFO : files: files passed Sep 13 00:01:56.891117 ignition[956]: INFO : Ignition finished successfully Sep 13 00:01:56.891928 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:01:56.899051 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:01:56.903539 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:01:56.908290 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:01:56.908405 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 13 00:01:56.923402 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:01:56.923402 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:01:56.925909 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:01:56.928355 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:01:56.929951 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:01:56.938161 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:01:56.978934 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:01:56.979105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:01:56.983011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:01:56.984314 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:01:56.986266 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:01:56.987577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:01:57.014812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:01:57.023137 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 00:01:57.038283 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:01:57.039664 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:01:57.041269 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:01:57.042958 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:01:57.043190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:01:57.045032 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:01:57.045885 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:01:57.046697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:01:57.048363 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:01:57.050037 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:01:57.051526 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:01:57.052531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:01:57.053638 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:01:57.054627 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:01:57.055505 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:01:57.056318 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:01:57.056501 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:01:57.057608 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:01:57.058639 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:01:57.059594 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:01:57.059701 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 13 00:01:57.060691 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:01:57.060911 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:01:57.062292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:01:57.062456 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:01:57.063445 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:01:57.063602 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:01:57.064421 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:01:57.064567 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:01:57.070134 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:01:57.071788 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:01:57.072323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:01:57.074972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:01:57.075734 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:01:57.075878 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:01:57.084128 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:01:57.084951 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:01:57.092466 ignition[1007]: INFO : Ignition 2.19.0 Sep 13 00:01:57.092466 ignition[1007]: INFO : Stage: umount Sep 13 00:01:57.094416 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:01:57.094416 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Sep 13 00:01:57.094416 ignition[1007]: INFO : umount: umount passed Sep 13 00:01:57.094416 ignition[1007]: INFO : Ignition finished successfully Sep 13 00:01:57.095281 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:01:57.095386 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:01:57.100370 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:01:57.101224 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:01:57.101327 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:01:57.103498 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:01:57.103552 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:01:57.104287 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:01:57.104326 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 00:01:57.104955 systemd[1]: Stopped target network.target - Network. Sep 13 00:01:57.105455 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:01:57.105504 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:01:57.110212 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:01:57.111760 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:01:57.116773 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:01:57.122144 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:01:57.122852 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 13 00:01:57.123642 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:01:57.123695 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:01:57.126344 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:01:57.126387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:01:57.126995 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:01:57.127043 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:01:57.127596 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:01:57.127633 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:01:57.131007 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:01:57.132786 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:01:57.134826 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:01:57.134938 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:01:57.136048 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:01:57.136141 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:01:57.136983 systemd-networkd[777]: eth0: DHCPv6 lease lost Sep 13 00:01:57.140131 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:01:57.140294 systemd-networkd[777]: eth1: DHCPv6 lease lost Sep 13 00:01:57.141400 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:01:57.144665 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:01:57.144805 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:01:57.146331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:01:57.146381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:01:57.154003 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:01:57.154474 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:01:57.154532 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:01:57.156483 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:01:57.156528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:01:57.157540 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:01:57.157578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:01:57.158682 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:01:57.158727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:01:57.159934 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:01:57.173183 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:01:57.173326 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:01:57.174803 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:01:57.174958 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:01:57.176296 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:01:57.176363 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:01:57.177691 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Sep 13 00:01:57.177746 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:01:57.178700 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:01:57.178757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:01:57.180324 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:01:57.180369 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:01:57.182056 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:01:57.182102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:01:57.194630 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:01:57.196256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:01:57.196368 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:01:57.201137 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:01:57.201215 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:01:57.205041 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:01:57.205090 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:01:57.206250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:01:57.206291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:57.208840 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:01:57.209070 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:01:57.211769 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:01:57.222513 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:01:57.231183 systemd[1]: Switching root. Sep 13 00:01:57.268921 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Sep 13 00:01:57.269016 systemd-journald[235]: Journal stopped Sep 13 00:01:58.115476 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:01:58.115544 kernel: SELinux: policy capability open_perms=1 Sep 13 00:01:58.115558 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:01:58.115574 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:01:58.115583 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:01:58.115593 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:01:58.115603 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:01:58.115612 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:01:58.115622 kernel: audit: type=1403 audit(1757721717.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:01:58.115634 systemd[1]: Successfully loaded SELinux policy in 33.777ms. Sep 13 00:01:58.115656 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.566ms. Sep 13 00:01:58.115668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:01:58.115679 systemd[1]: Detected virtualization kvm. 
Sep 13 00:01:58.115690 systemd[1]: Detected architecture arm64. Sep 13 00:01:58.115713 systemd[1]: Detected first boot. Sep 13 00:01:58.115724 systemd[1]: Hostname set to <ci-4081-3-5-n-d78c7abf5e>. Sep 13 00:01:58.115740 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:01:58.115751 zram_generator::config[1052]: No configuration found. Sep 13 00:01:58.115764 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:01:58.115775 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:01:58.115785 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:01:58.115796 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:01:58.115807 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:01:58.115818 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:01:58.115829 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:01:58.115839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:01:58.115851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:01:58.115862 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:01:58.115873 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:01:58.115888 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:01:58.116062 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:01:58.116077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:01:58.116088 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:01:58.116099 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:01:58.116110 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:01:58.116124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:01:58.116134 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:01:58.116145 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:01:58.116156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:01:58.116166 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:01:58.116177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:01:58.116190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:01:58.116200 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:01:58.116215 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:01:58.116229 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:01:58.116240 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:01:58.116251 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:01:58.116261 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:01:58.116272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:01:58.116283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:01:58.116294 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:01:58.116306 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:01:58.116317 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:01:58.116328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:01:58.116339 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:01:58.116350 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:01:58.116360 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:01:58.116371 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:01:58.116382 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:01:58.116394 systemd[1]: Reached target machines.target - Containers. Sep 13 00:01:58.116405 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:01:58.116420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:01:58.116431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:01:58.116443 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:01:58.116456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:01:58.116470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:01:58.116481 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:01:58.116492 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:01:58.116503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:01:58.116515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:01:58.116525 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:01:58.116536 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:01:58.116549 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:01:58.116561 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:01:58.116572 kernel: loop: module loaded Sep 13 00:01:58.116582 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:01:58.116593 kernel: ACPI: bus type drm_connector registered Sep 13 00:01:58.116603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:01:58.116614 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:01:58.116628 kernel: fuse: init (API version 7.39) Sep 13 00:01:58.116638 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:01:58.116649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:01:58.116660 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:01:58.116672 systemd[1]: Stopped verity-setup.service. 
Sep 13 00:01:58.116683 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:01:58.116707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:01:58.116721 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:01:58.116734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:01:58.116745 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:01:58.116756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:01:58.116766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:01:58.116778 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:01:58.116810 systemd-journald[1126]: Collecting audit messages is disabled. Sep 13 00:01:58.116835 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:01:58.116849 systemd-journald[1126]: Journal started Sep 13 00:01:58.116870 systemd-journald[1126]: Runtime Journal (/run/log/journal/97c96ec701f940e39b797134b5d5f1a4) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:01:57.856498 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:01:57.875005 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 13 00:01:57.875747 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:01:58.118518 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:01:58.121678 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:01:58.122920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:01:58.123086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:01:58.123988 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:01:58.124126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:01:58.124955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:01:58.125067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:01:58.125991 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:01:58.126105 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:01:58.127225 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:01:58.127354 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:01:58.128188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:01:58.129081 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:01:58.130042 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:01:58.141415 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:01:58.148130 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:01:58.152420 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:01:58.153116 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:01:58.153147 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:01:58.154616 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
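
Note: journald above starts with an 8.0M runtime journal in /run (max 76.6M) and, once /var is writable, flushes to a persistent system journal (seen shortly after this). Current usage across both can be checked with:

    # Report combined disk usage of active and archived journal files
    journalctl --disk-usage
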
Sep 13 00:01:58.157124 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:01:58.163162 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:01:58.163828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:01:58.165880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:01:58.169071 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:01:58.169651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:01:58.172836 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:01:58.173542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:01:58.175872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:01:58.182279 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:01:58.188118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:01:58.192353 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:01:58.194234 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:01:58.196956 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:01:58.229653 systemd-journald[1126]: Time spent on flushing to /var/log/journal/97c96ec701f940e39b797134b5d5f1a4 is 50.544ms for 1124 entries. Sep 13 00:01:58.229653 systemd-journald[1126]: System Journal (/var/log/journal/97c96ec701f940e39b797134b5d5f1a4) is 8.0M, max 584.8M, 576.8M free. Sep 13 00:01:58.308542 systemd-journald[1126]: Received client request to flush runtime journal. Sep 13 00:01:58.308594 kernel: loop0: detected capacity change from 0 to 114432 Sep 13 00:01:58.308623 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:01:58.308638 kernel: loop1: detected capacity change from 0 to 114328 Sep 13 00:01:58.232685 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:01:58.233821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:01:58.242295 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:01:58.246093 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:01:58.257212 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:01:58.276290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:01:58.280356 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 13 00:01:58.280367 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 13 00:01:58.294206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:01:58.311193 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:01:58.316988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:01:58.325731 udevadm[1176]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:01:58.327285 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:01:58.330830 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:01:58.357071 kernel: loop2: detected capacity change from 0 to 203944 Sep 13 00:01:58.369968 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:01:58.378119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:01:58.410833 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Sep 13 00:01:58.410853 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Sep 13 00:01:58.412266 kernel: loop3: detected capacity change from 0 to 8 Sep 13 00:01:58.420858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:01:58.432932 kernel: loop4: detected capacity change from 0 to 114432 Sep 13 00:01:58.448968 kernel: loop5: detected capacity change from 0 to 114328 Sep 13 00:01:58.462923 kernel: loop6: detected capacity change from 0 to 203944 Sep 13 00:01:58.481140 kernel: loop7: detected capacity change from 0 to 8 Sep 13 00:01:58.482229 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 13 00:01:58.482649 (sd-merge)[1194]: Merged extensions into '/usr'. Sep 13 00:01:58.491006 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:01:58.491021 systemd[1]: Reloading... Sep 13 00:01:58.604943 zram_generator::config[1223]: No configuration found. Sep 13 00:01:58.712934 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:01:58.754197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:01:58.800628 systemd[1]: Reloading finished in 309 ms. Sep 13 00:01:58.845505 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:01:58.850213 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:01:58.859018 systemd[1]: Starting ensure-sysext.service... Sep 13 00:01:58.860601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:01:58.870347 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:01:58.870487 systemd[1]: Reloading... Sep 13 00:01:58.900470 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:01:58.901201 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:01:58.904100 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:01:58.904705 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Sep 13 00:01:58.905021 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Sep 13 00:01:58.909610 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
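
Note: the loop0 through loop7 capacity-change lines and the (sd-merge) messages above are systemd-sysext attaching the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-hetzner) as loop devices and overlaying them onto /usr, which is why systemd then reloads its unit set. The merge state can be inspected or redone on a running system:

    # List known system extensions and whether they are merged
    systemd-sysext list
    # Re-scan extension images and apply any additions or removals to /usr
    systemd-sysext refresh
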
Sep 13 00:01:58.912144 systemd-tmpfiles[1259]: Skipping /boot Sep 13 00:01:58.927048 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:01:58.927203 systemd-tmpfiles[1259]: Skipping /boot Sep 13 00:01:58.976920 zram_generator::config[1288]: No configuration found. Sep 13 00:01:59.069942 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:01:59.115742 systemd[1]: Reloading finished in 244 ms. Sep 13 00:01:59.132227 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:01:59.133257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:01:59.146208 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:01:59.149112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:01:59.155115 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:01:59.161022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:01:59.169058 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:01:59.176059 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:01:59.187185 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:01:59.189126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:01:59.194016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:01:59.198314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:01:59.202156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:01:59.203127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:01:59.204734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:01:59.205788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:01:59.213143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:01:59.216987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:01:59.217778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:01:59.222321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:01:59.234414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:01:59.235092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:01:59.241596 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Sep 13 00:01:59.247958 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:01:59.251469 systemd[1]: Finished ensure-sysext.service. Sep 13 00:01:59.260130 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Sep 13 00:01:59.261670 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:01:59.262710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:01:59.270220 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:01:59.281337 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:01:59.307465 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:01:59.315355 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:01:59.322908 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:01:59.325539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:01:59.338368 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:01:59.352291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:01:59.353923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:01:59.354743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:01:59.357037 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:01:59.357205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:01:59.358296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:01:59.358969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:01:59.367957 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:01:59.372119 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:01:59.373602 augenrules[1381]: No rules Sep 13 00:01:59.377742 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:01:59.435409 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:01:59.533919 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1361) Sep 13 00:01:59.534004 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:01:59.547490 systemd-networkd[1354]: lo: Link UP Sep 13 00:01:59.547502 systemd-networkd[1354]: lo: Gained carrier Sep 13 00:01:59.549110 systemd-networkd[1354]: Enumeration completed Sep 13 00:01:59.549218 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:01:59.551562 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.551574 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:01:59.552596 systemd-networkd[1354]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.552613 systemd-networkd[1354]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 00:01:59.553124 systemd-networkd[1354]: eth0: Link UP Sep 13 00:01:59.553135 systemd-networkd[1354]: eth0: Gained carrier Sep 13 00:01:59.553149 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.570726 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:01:59.575455 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Sep 13 00:01:59.577164 systemd-networkd[1354]: eth1: Link UP Sep 13 00:01:59.577227 systemd-networkd[1354]: eth1: Gained carrier Sep 13 00:01:59.578067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:01:59.578256 systemd-networkd[1354]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.580657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:01:59.584378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:01:59.585992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:01:59.586576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:01:59.586606 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:01:59.598883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:01:59.599060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:01:59.600165 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:01:59.605851 systemd-resolved[1329]: Positive Trust Anchors: Sep 13 00:01:59.605873 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:01:59.606195 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:01:59.612320 systemd-networkd[1354]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.620233 systemd-resolved[1329]: Using system hostname 'ci-4081-3-5-n-d78c7abf5e'. Sep 13 00:01:59.628196 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:01:59.629549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:01:59.630819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 13 00:01:59.633663 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Sep 13 00:01:59.633750 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 13 00:01:59.633764 kernel: [drm] features: -context_init Sep 13 00:01:59.633782 kernel: [drm] number of scanouts: 1 Sep 13 00:01:59.632947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:01:59.635553 kernel: [drm] number of cap sets: 0 Sep 13 00:01:59.635614 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Sep 13 00:01:59.634856 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:01:59.635022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:01:59.639609 systemd[1]: Reached target network.target - Network. Sep 13 00:01:59.642150 kernel: Console: switching to colour frame buffer device 160x50 Sep 13 00:01:59.647079 systemd-networkd[1354]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Sep 13 00:01:59.649085 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 13 00:01:59.651913 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 13 00:01:59.653206 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:01:59.653302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:01:59.654075 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:01:59.655054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:01:59.672989 systemd-networkd[1354]: eth0: DHCPv4 address 188.245.230.74/32, gateway 172.31.1.1 acquired from 172.31.1.1 Sep 13 00:01:59.674345 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 13 00:01:59.718648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:01:59.725049 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Sep 13 00:01:59.736464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:01:59.739411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:01:59.739594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:59.750187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:01:59.752839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:01:59.811570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:01:59.856706 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:01:59.864372 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:01:59.881069 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:01:59.907727 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:01:59.910108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:01:59.911418 systemd[1]: Reached target sysinit.target - System Initialization. 
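
Both DHCP leases above hand out /32 host addresses, and eth0's gateway (172.31.1.1) lies outside the assigned prefix; that only routes because the gateway is treated as reachable on-link. A static sketch of the same shape for eth0, with the address and gateway copied from the lease in the log and everything else assumed:

    # Hypothetical static equivalent of the eth0 lease shown above
    [Match]
    Name=eth0

    [Network]
    Address=188.245.230.74/32

    [Route]
    Gateway=172.31.1.1
    GatewayOnLink=yes   # gateway is on-link despite sitting outside the /32
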
Sep 13 00:01:59.913240 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:01:59.914265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:01:59.915177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:01:59.915882 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:01:59.916566 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:01:59.917358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:01:59.917394 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:01:59.917884 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:01:59.919574 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:01:59.921724 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:01:59.927055 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:01:59.929623 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:01:59.931160 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:01:59.932097 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:01:59.932836 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:01:59.933593 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:01:59.933699 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:01:59.935131 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:01:59.938931 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:01:59.940449 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:01:59.943185 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:01:59.952094 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:01:59.956117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:01:59.957443 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:01:59.961137 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:01:59.969074 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:01:59.973988 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 13 00:01:59.978979 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:01:59.987008 jq[1450]: false Sep 13 00:01:59.984112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:01:59.992742 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:01:59.995496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:01:59.996228 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 13 00:01:59.999932 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:02:00.004964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:02:00.007793 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:02:00.016630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:02:00.018024 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:02:00.050439 extend-filesystems[1451]: Found loop4 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found loop5 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found loop6 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found loop7 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda1 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda2 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda3 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found usr Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda4 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda6 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda7 Sep 13 00:02:00.059223 extend-filesystems[1451]: Found sda9 Sep 13 00:02:00.059223 extend-filesystems[1451]: Checking size of /dev/sda9 Sep 13 00:02:00.058798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:02:00.065616 dbus-daemon[1449]: [system] SELinux support is enabled Sep 13 00:02:00.102386 coreos-metadata[1448]: Sep 13 00:02:00.080 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 13 00:02:00.102386 coreos-metadata[1448]: Sep 13 00:02:00.091 INFO Fetch successful Sep 13 00:02:00.102386 coreos-metadata[1448]: Sep 13 00:02:00.091 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 13 00:02:00.102386 coreos-metadata[1448]: Sep 13 00:02:00.100 INFO Fetch successful Sep 13 00:02:00.102647 extend-filesystems[1451]: Resized partition /dev/sda9 Sep 13 00:02:00.060987 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:02:00.108296 jq[1462]: true Sep 13 00:02:00.108478 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:02:00.115239 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 13 00:02:00.065848 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:02:00.084477 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:02:00.115550 tar[1468]: linux-arm64/helm Sep 13 00:02:00.084507 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:02:00.085375 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:02:00.085479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:02:00.085515 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:02:00.112743 systemd[1]: motdgen.service: Deactivated successfully. 
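
extend-filesystems is about to grow the root filesystem online: the EXT4 resize from 1617920 to 9393147 4k blocks visible here and just below takes / from roughly 6.2 GiB to roughly 35.8 GiB without unmounting. A sketch of the equivalent manual steps, assuming the cloud-utils growpart tool is available:

    # Hypothetical manual equivalent of extend-filesystems.service
    growpart /dev/sda 9    # grow partition 9 (ROOT) to the end of the disk
    resize2fs /dev/sda9    # ext4 grows online while mounted on /
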
Sep 13 00:02:00.113976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:02:00.124880 update_engine[1461]: I20250913 00:02:00.124637 1461 main.cc:92] Flatcar Update Engine starting Sep 13 00:02:00.130506 jq[1484]: true Sep 13 00:02:00.141548 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:02:00.146442 update_engine[1461]: I20250913 00:02:00.146363 1461 update_check_scheduler.cc:74] Next update check in 11m22s Sep 13 00:02:00.152226 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:02:00.230202 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1353) Sep 13 00:02:00.241237 systemd-logind[1459]: New seat seat0. Sep 13 00:02:00.265767 bash[1513]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:02:00.292456 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:02:00.279272 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:02:00.297111 extend-filesystems[1488]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:02:00.297111 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:02:00.297111 extend-filesystems[1488]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:02:00.307815 extend-filesystems[1451]: Resized filesystem in /dev/sda9 Sep 13 00:02:00.307815 extend-filesystems[1451]: Found sr0 Sep 13 00:02:00.299004 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:02:00.299048 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 13 00:02:00.309243 systemd[1]: Starting sshkeys.service... Sep 13 00:02:00.309823 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:02:00.318154 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:02:00.318338 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:02:00.357155 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:02:00.367303 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:02:00.374019 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:02:00.375089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:02:00.451756 containerd[1482]: time="2025-09-13T00:02:00.449399880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:02:00.453622 coreos-metadata[1527]: Sep 13 00:02:00.453 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:02:00.462958 coreos-metadata[1527]: Sep 13 00:02:00.462 INFO Fetch successful Sep 13 00:02:00.465867 unknown[1527]: wrote ssh authorized keys file for user: core Sep 13 00:02:00.508782 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:02:00.511417 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:02:00.519954 systemd[1]: Finished sshkeys.service. Sep 13 00:02:00.536578 containerd[1482]: time="2025-09-13T00:02:00.536486400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545354 containerd[1482]: time="2025-09-13T00:02:00.545013360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545354 containerd[1482]: time="2025-09-13T00:02:00.545098440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:02:00.545354 containerd[1482]: time="2025-09-13T00:02:00.545118000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:02:00.545354 containerd[1482]: time="2025-09-13T00:02:00.545289000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:02:00.545354 containerd[1482]: time="2025-09-13T00:02:00.545354960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545534 containerd[1482]: time="2025-09-13T00:02:00.545433360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545534 containerd[1482]: time="2025-09-13T00:02:00.545446960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545849 containerd[1482]: time="2025-09-13T00:02:00.545624560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545849 containerd[1482]: time="2025-09-13T00:02:00.545659000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545849 containerd[1482]: time="2025-09-13T00:02:00.545678440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545849 containerd[1482]: time="2025-09-13T00:02:00.545689960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.545849 containerd[1482]: time="2025-09-13T00:02:00.545767120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.546242 containerd[1482]: time="2025-09-13T00:02:00.546071480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:02:00.546242 containerd[1482]: time="2025-09-13T00:02:00.546189880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:02:00.546242 containerd[1482]: time="2025-09-13T00:02:00.546205920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Sep 13 00:02:00.546313 containerd[1482]: time="2025-09-13T00:02:00.546288360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:02:00.546598 containerd[1482]: time="2025-09-13T00:02:00.546351600Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:02:00.554225 containerd[1482]: time="2025-09-13T00:02:00.554174040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:02:00.554343 containerd[1482]: time="2025-09-13T00:02:00.554239520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:02:00.554343 containerd[1482]: time="2025-09-13T00:02:00.554256840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:02:00.554343 containerd[1482]: time="2025-09-13T00:02:00.554284080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:02:00.554343 containerd[1482]: time="2025-09-13T00:02:00.554298920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554457760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554706320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554806200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554823800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554837800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554851120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554863520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554875200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.554907 containerd[1482]: time="2025-09-13T00:02:00.554889840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.554925920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.554939200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.554952960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.554965800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.554985160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555003280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555022200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555035400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555066800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555080160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555091640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555108360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555121200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555138 containerd[1482]: time="2025-09-13T00:02:00.555135640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555147800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555160600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555173280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555192960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555212840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555225080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555235720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555339720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555356480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:02:00.555370 containerd[1482]: time="2025-09-13T00:02:00.555367160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:02:00.555533 containerd[1482]: time="2025-09-13T00:02:00.555379040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:02:00.555533 containerd[1482]: time="2025-09-13T00:02:00.555388800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.555533 containerd[1482]: time="2025-09-13T00:02:00.555400360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:02:00.555533 containerd[1482]: time="2025-09-13T00:02:00.555409840Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:02:00.555533 containerd[1482]: time="2025-09-13T00:02:00.555419880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 13 00:02:00.556369 containerd[1482]: time="2025-09-13T00:02:00.555797760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:02:00.556369 containerd[1482]: time="2025-09-13T00:02:00.555867640Z" level=info msg="Connect containerd service" Sep 13 00:02:00.557993 containerd[1482]: time="2025-09-13T00:02:00.557965040Z" level=info msg="using legacy CRI server" Sep 13 00:02:00.557993 containerd[1482]: time="2025-09-13T00:02:00.557985760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:02:00.558299 containerd[1482]: time="2025-09-13T00:02:00.558151840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:02:00.558974 containerd[1482]: time="2025-09-13T00:02:00.558945680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559126920Z" level=info msg="Start subscribing containerd event" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559174800Z" level=info msg="Start recovering state" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559237240Z" level=info msg="Start event monitor" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559247520Z" level=info msg="Start snapshots syncer" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559255760Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:02:00.559396 containerd[1482]: time="2025-09-13T00:02:00.559262920Z" level=info msg="Start streaming server" Sep 13 00:02:00.563629 containerd[1482]: time="2025-09-13T00:02:00.563589640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:02:00.563705 containerd[1482]: time="2025-09-13T00:02:00.563685640Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:02:00.563850 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:02:00.567987 containerd[1482]: time="2025-09-13T00:02:00.567945120Z" level=info msg="containerd successfully booted in 0.122832s" Sep 13 00:02:00.633323 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:02:00.756493 tar[1468]: linux-arm64/LICENSE Sep 13 00:02:00.756493 tar[1468]: linux-arm64/README.md Sep 13 00:02:00.768973 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:02:01.144567 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:02:01.168817 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:02:01.175622 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:02:01.186521 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:02:01.186788 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:02:01.196422 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
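
The CRI configuration dumped above encodes, among other things, SystemdCgroup:true for the runc runtime and the CNI search paths; the "failed to load cni during init" error is expected this early, since /etc/cni/net.d stays empty until a network plugin is installed later. A config.toml fragment that would produce those settings, as a sketch only (the file path and surrounding defaults are assumptions):

    # Hypothetical /etc/containerd/config.toml fragment
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true    # matches Options:map[SystemdCgroup:true] in the dump

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"   # empty at this point, hence the cni load error
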
Sep 13 00:02:01.208116 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:02:01.218499 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:02:01.223262 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:02:01.224408 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:02:01.327126 systemd-networkd[1354]: eth0: Gained IPv6LL Sep 13 00:02:01.328363 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 13 00:02:01.332296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:02:01.334984 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:02:01.353247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:01.356139 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:02:01.389783 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:02:01.519256 systemd-networkd[1354]: eth1: Gained IPv6LL Sep 13 00:02:01.520032 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 13 00:02:02.138043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:02.139649 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:02:02.143301 systemd[1]: Startup finished in 768ms (kernel) + 4.692s (initrd) + 4.794s (userspace) = 10.256s. Sep 13 00:02:02.152064 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:02.702569 kubelet[1579]: E0913 00:02:02.702519 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:02.706803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:02.707126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:02:07.141729 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:02:07.148306 systemd[1]: Started sshd@0-188.245.230.74:22-147.75.109.163:47294.service - OpenSSH per-connection server daemon (147.75.109.163:47294). Sep 13 00:02:08.128078 sshd[1591]: Accepted publickey for core from 147.75.109.163 port 47294 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:08.130190 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:08.147592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:02:08.148138 systemd-logind[1459]: New session 1 of user core. Sep 13 00:02:08.153492 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:02:08.170404 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:02:08.184315 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:02:08.189299 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:02:08.296907 systemd[1595]: Queued start job for default target default.target. Sep 13 00:02:08.306033 systemd[1595]: Created slice app.slice - User Application Slice. 
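
The kubelet exit logged above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init or join, so the unit fails and systemd keeps rescheduling it (the rising restart counters appear further down). For orientation, a minimal hand-written KubeletConfiguration of the kind kubeadm generates, as a sketch only:

    # Hypothetical minimal /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # consistent with the SystemdCgroup=true runc setting above
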
Sep 13 00:02:08.306074 systemd[1595]: Reached target paths.target - Paths. Sep 13 00:02:08.306090 systemd[1595]: Reached target timers.target - Timers. Sep 13 00:02:08.307641 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:02:08.322470 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:02:08.322722 systemd[1595]: Reached target sockets.target - Sockets. Sep 13 00:02:08.322757 systemd[1595]: Reached target basic.target - Basic System. Sep 13 00:02:08.322847 systemd[1595]: Reached target default.target - Main User Target. Sep 13 00:02:08.323443 systemd[1595]: Startup finished in 126ms. Sep 13 00:02:08.323532 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:02:08.333192 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:02:09.027182 systemd[1]: Started sshd@1-188.245.230.74:22-147.75.109.163:47310.service - OpenSSH per-connection server daemon (147.75.109.163:47310). Sep 13 00:02:10.009621 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 47310 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:10.011765 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:10.016667 systemd-logind[1459]: New session 2 of user core. Sep 13 00:02:10.025256 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:02:10.694577 sshd[1606]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:10.699092 systemd[1]: sshd@1-188.245.230.74:22-147.75.109.163:47310.service: Deactivated successfully. Sep 13 00:02:10.701766 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:02:10.703966 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:02:10.705730 systemd-logind[1459]: Removed session 2. Sep 13 00:02:10.864644 systemd[1]: Started sshd@2-188.245.230.74:22-147.75.109.163:40894.service - OpenSSH per-connection server daemon (147.75.109.163:40894). Sep 13 00:02:11.851130 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 40894 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:11.853368 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:11.857964 systemd-logind[1459]: New session 3 of user core. Sep 13 00:02:11.866207 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:02:12.530013 sshd[1613]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:12.536013 systemd[1]: sshd@2-188.245.230.74:22-147.75.109.163:40894.service: Deactivated successfully. Sep 13 00:02:12.537839 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:02:12.538767 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:02:12.539854 systemd-logind[1459]: Removed session 3. Sep 13 00:02:12.707335 systemd[1]: Started sshd@3-188.245.230.74:22-147.75.109.163:40900.service - OpenSSH per-connection server daemon (147.75.109.163:40900). Sep 13 00:02:12.708808 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:02:12.712153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:12.822977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:02:12.834548 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:12.881951 kubelet[1630]: E0913 00:02:12.881871 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:12.886155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:12.886570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:02:13.690026 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 40900 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:13.692022 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:13.698767 systemd-logind[1459]: New session 4 of user core. Sep 13 00:02:13.705266 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:02:14.372379 sshd[1620]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:14.376818 systemd[1]: sshd@3-188.245.230.74:22-147.75.109.163:40900.service: Deactivated successfully. Sep 13 00:02:14.378383 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:02:14.379791 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:02:14.380859 systemd-logind[1459]: Removed session 4. Sep 13 00:02:14.557401 systemd[1]: Started sshd@4-188.245.230.74:22-147.75.109.163:40916.service - OpenSSH per-connection server daemon (147.75.109.163:40916). Sep 13 00:02:15.540778 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 40916 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:15.542386 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:15.548096 systemd-logind[1459]: New session 5 of user core. Sep 13 00:02:15.554244 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:02:16.074296 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:02:16.074597 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:16.092983 sudo[1644]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:16.255090 sshd[1641]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:16.260158 systemd[1]: sshd@4-188.245.230.74:22-147.75.109.163:40916.service: Deactivated successfully. Sep 13 00:02:16.261737 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:02:16.262871 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:02:16.264299 systemd-logind[1459]: Removed session 5. Sep 13 00:02:16.438338 systemd[1]: Started sshd@5-188.245.230.74:22-147.75.109.163:40918.service - OpenSSH per-connection server daemon (147.75.109.163:40918). Sep 13 00:02:17.417613 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 40918 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:17.419828 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:17.425667 systemd-logind[1459]: New session 6 of user core. Sep 13 00:02:17.437154 systemd[1]: Started session-6.scope - Session 6 of User core. 
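
The setenforce 1 invocation above flips SELinux into enforcing mode for the running system. A quick sketch of how the resulting mode can be confirmed, assuming the standard SELinux userland tools are installed:

    # Hypothetical verification commands
    getenforce    # prints Enforcing / Permissive / Disabled
    sestatus      # fuller report: current mode, loaded policy, selinuxfs mount
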
Sep 13 00:02:17.941474 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:02:17.942207 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:17.945999 sudo[1653]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:17.953322 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:02:17.953831 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:17.973445 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:02:17.976240 auditctl[1656]: No rules Sep 13 00:02:17.976758 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:02:17.977061 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:02:17.985447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:02:18.011457 augenrules[1674]: No rules Sep 13 00:02:18.013991 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:02:18.015210 sudo[1652]: pam_unix(sudo:session): session closed for user root Sep 13 00:02:18.175576 sshd[1649]: pam_unix(sshd:session): session closed for user core Sep 13 00:02:18.180981 systemd[1]: sshd@5-188.245.230.74:22-147.75.109.163:40918.service: Deactivated successfully. Sep 13 00:02:18.183683 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:02:18.186618 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:02:18.188256 systemd-logind[1459]: Removed session 6. Sep 13 00:02:18.356484 systemd[1]: Started sshd@6-188.245.230.74:22-147.75.109.163:40920.service - OpenSSH per-connection server daemon (147.75.109.163:40920). Sep 13 00:02:19.327943 sshd[1682]: Accepted publickey for core from 147.75.109.163 port 40920 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:02:19.330396 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:02:19.337796 systemd-logind[1459]: New session 7 of user core. Sep 13 00:02:19.351173 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:02:19.846546 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:02:19.846952 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:02:20.143362 (dockerd)[1701]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:02:20.143396 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:02:20.382097 dockerd[1701]: time="2025-09-13T00:02:20.382006000Z" level=info msg="Starting up" Sep 13 00:02:20.453420 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport243177679-merged.mount: Deactivated successfully. Sep 13 00:02:20.476466 dockerd[1701]: time="2025-09-13T00:02:20.476162640Z" level=info msg="Loading containers: start." Sep 13 00:02:20.578958 kernel: Initializing XFRM netlink socket Sep 13 00:02:20.600641 systemd-timesyncd[1346]: Network configuration changed, trying to establish connection. Sep 13 00:02:20.659537 systemd-networkd[1354]: docker0: Link UP Sep 13 00:02:20.668036 systemd-timesyncd[1346]: Contacted time server 185.207.105.38:123 (2.flatcar.pool.ntp.org). 
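
The audit-rules restart above reloads whatever remains in /etc/audit/rules.d after the two rule files were deleted, which is why both auditctl and augenrules report "No rules". A sketch of the equivalent manual steps, assuming the standard audit userland:

    # Hypothetical manual equivalent of the audit-rules restart
    augenrules --load    # compile /etc/audit/rules.d/*.rules and load the result
    auditctl -l          # list loaded rules; prints "No rules" here
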
Sep 13 00:02:20.668412 systemd-timesyncd[1346]: Initial clock synchronization to Sat 2025-09-13 00:02:20.454051 UTC. Sep 13 00:02:20.674924 dockerd[1701]: time="2025-09-13T00:02:20.674113760Z" level=info msg="Loading containers: done." Sep 13 00:02:20.694778 dockerd[1701]: time="2025-09-13T00:02:20.694701080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:02:20.695077 dockerd[1701]: time="2025-09-13T00:02:20.694824480Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:02:20.695077 dockerd[1701]: time="2025-09-13T00:02:20.694978960Z" level=info msg="Daemon has completed initialization" Sep 13 00:02:20.728100 dockerd[1701]: time="2025-09-13T00:02:20.727820040Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:02:20.728094 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:02:21.727145 containerd[1482]: time="2025-09-13T00:02:21.726819340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:02:22.358083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2220205956.mount: Deactivated successfully. Sep 13 00:02:23.136720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:02:23.145133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:02:23.256189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:02:23.268409 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:02:23.311376 kubelet[1903]: E0913 00:02:23.311308 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:02:23.317307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:02:23.317459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
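
The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, dockerd avoids the native overlay diff path, which mainly costs image-build performance. A sketch of how to confirm the active storage driver and the kernel option the warning names (the /proc/config.gz path assumes the kernel exposes its build config there):

    # Hypothetical checks for the storage-driver warning
    docker info --format '{{.Driver}}'                    # expect: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR   # the option named in the warning
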
Sep 13 00:02:23.720415 containerd[1482]: time="2025-09-13T00:02:23.720342882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:23.722723 containerd[1482]: time="2025-09-13T00:02:23.722651338Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687423"
Sep 13 00:02:23.724124 containerd[1482]: time="2025-09-13T00:02:23.724081564Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:23.726910 containerd[1482]: time="2025-09-13T00:02:23.726846061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:23.729337 containerd[1482]: time="2025-09-13T00:02:23.728965149Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.002099576s"
Sep 13 00:02:23.729337 containerd[1482]: time="2025-09-13T00:02:23.729002254Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:02:23.730844 containerd[1482]: time="2025-09-13T00:02:23.730765059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:02:25.250926 containerd[1482]: time="2025-09-13T00:02:25.249243454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:25.250926 containerd[1482]: time="2025-09-13T00:02:25.250701624Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459787"
Sep 13 00:02:25.251491 containerd[1482]: time="2025-09-13T00:02:25.251455318Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:25.255440 containerd[1482]: time="2025-09-13T00:02:25.255378054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:25.257314 containerd[1482]: time="2025-09-13T00:02:25.257261462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.526453588s"
Sep 13 00:02:25.257393 containerd[1482]: time="2025-09-13T00:02:25.257312018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:02:25.258535 containerd[1482]: time="2025-09-13T00:02:25.258508903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:02:27.111876 containerd[1482]: time="2025-09-13T00:02:27.111784844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:27.113916 containerd[1482]: time="2025-09-13T00:02:27.113549003Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127526"
Sep 13 00:02:27.115455 containerd[1482]: time="2025-09-13T00:02:27.115357900Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:27.123523 containerd[1482]: time="2025-09-13T00:02:27.122298585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:27.125134 containerd[1482]: time="2025-09-13T00:02:27.125076210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.866425534s"
Sep 13 00:02:27.125211 containerd[1482]: time="2025-09-13T00:02:27.125137625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:02:27.125828 containerd[1482]: time="2025-09-13T00:02:27.125772326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:02:28.103235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513113245.mount: Deactivated successfully.
Sep 13 00:02:28.465076 containerd[1482]: time="2025-09-13T00:02:28.464842347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:28.466881 containerd[1482]: time="2025-09-13T00:02:28.466754561Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954933"
Sep 13 00:02:28.469088 containerd[1482]: time="2025-09-13T00:02:28.469022208Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:28.471460 containerd[1482]: time="2025-09-13T00:02:28.471413465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:28.472507 containerd[1482]: time="2025-09-13T00:02:28.472377092Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.346417146s"
Sep 13 00:02:28.472507 containerd[1482]: time="2025-09-13T00:02:28.472416949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:02:28.474118 containerd[1482]: time="2025-09-13T00:02:28.474084476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:02:29.069715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637000781.mount: Deactivated successfully.
Sep 13 00:02:29.751101 containerd[1482]: time="2025-09-13T00:02:29.751024728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:29.752830 containerd[1482]: time="2025-09-13T00:02:29.752783361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Sep 13 00:02:29.754453 containerd[1482]: time="2025-09-13T00:02:29.753946152Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:29.758095 containerd[1482]: time="2025-09-13T00:02:29.758049272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:29.759617 containerd[1482]: time="2025-09-13T00:02:29.759576330Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.285445056s"
Sep 13 00:02:29.759617 containerd[1482]: time="2025-09-13T00:02:29.759613387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:02:29.760687 containerd[1482]: time="2025-09-13T00:02:29.760662431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:02:30.259980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924826872.mount: Deactivated successfully.
Sep 13 00:02:30.266425 containerd[1482]: time="2025-09-13T00:02:30.266357167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:30.267478 containerd[1482]: time="2025-09-13T00:02:30.267444406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 13 00:02:30.268508 containerd[1482]: time="2025-09-13T00:02:30.268130215Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:30.271537 containerd[1482]: time="2025-09-13T00:02:30.271474784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:30.272791 containerd[1482]: time="2025-09-13T00:02:30.272745101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 511.870583ms"
Sep 13 00:02:30.272791 containerd[1482]: time="2025-09-13T00:02:30.272783590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:02:30.273873 containerd[1482]: time="2025-09-13T00:02:30.273850832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:02:30.875945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2456535958.mount: Deactivated successfully.
Sep 13 00:02:33.233950 containerd[1482]: time="2025-09-13T00:02:33.233702523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:33.235916 containerd[1482]: time="2025-09-13T00:02:33.235510505Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235"
Sep 13 00:02:33.237677 containerd[1482]: time="2025-09-13T00:02:33.237581943Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:33.243613 containerd[1482]: time="2025-09-13T00:02:33.243550545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:02:33.246210 containerd[1482]: time="2025-09-13T00:02:33.245394135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.971414875s"
Sep 13 00:02:33.246210 containerd[1482]: time="2025-09-13T00:02:33.245432209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:02:33.515474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
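The "Pulled image ... size ... in ..." messages above give enough to estimate registry throughput. A small Go sketch; the byte counts and durations are copied from the log, only the MiB/s arithmetic is added here:

```go
// Back-of-envelope pull throughput for three of the images pulled above.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		image string
		bytes float64
		dur   time.Duration
	}{
		{"kube-apiserver:v1.31.13", 25683924, 2002099576 * time.Nanosecond}, // "in 2.002099576s"
		{"kube-proxy:v1.31.13", 26953926, 1346417146 * time.Nanosecond},     // "in 1.346417146s"
		{"etcd:3.5.15-0", 66535646, 2971414875 * time.Nanosecond},           // "in 2.971414875s"
	}
	for _, p := range pulls {
		fmt.Printf("%-25s %5.1f MiB/s\n", p.image, p.bytes/p.dur.Seconds()/(1<<20))
	}
}
```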
Sep 13 00:02:33.525248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:33.629150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:33.632615 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:02:33.675680 kubelet[2056]: E0913 00:02:33.675598 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:02:33.678191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:02:33.678461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:02:37.973417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:37.982648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:38.019817 systemd[1]: Reloading requested from client PID 2085 ('systemctl') (unit session-7.scope)...
Sep 13 00:02:38.019833 systemd[1]: Reloading...
Sep 13 00:02:38.137926 zram_generator::config[2137]: No configuration found.
Sep 13 00:02:38.221662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:02:38.291423 systemd[1]: Reloading finished in 271 ms.
Sep 13 00:02:38.345198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:38.350345 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:02:38.350681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:38.359283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:38.469183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:38.470230 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:02:38.523301 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:02:38.523630 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:02:38.523673 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:02:38.523887 kubelet[2175]: I0913 00:02:38.523853 2175 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:02:39.397765 kubelet[2175]: I0913 00:02:39.397706 2175 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:02:39.397765 kubelet[2175]: I0913 00:02:39.397744 2175 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:02:39.398140 kubelet[2175]: I0913 00:02:39.398079 2175 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:02:39.425093 kubelet[2175]: E0913 00:02:39.424319 2175 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.230.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:39.425745 kubelet[2175]: I0913 00:02:39.425707 2175 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:02:39.435037 kubelet[2175]: E0913 00:02:39.435002 2175 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:02:39.435199 kubelet[2175]: I0913 00:02:39.435185 2175 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:02:39.438875 kubelet[2175]: I0913 00:02:39.438851 2175 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:02:39.440322 kubelet[2175]: I0913 00:02:39.440283 2175 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:02:39.440600 kubelet[2175]: I0913 00:02:39.440569 2175 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:02:39.440845 kubelet[2175]: I0913 00:02:39.440656 2175 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-d78c7abf5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:02:39.441064 kubelet[2175]: I0913 00:02:39.441049 2175 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:02:39.441118 kubelet[2175]: I0913 00:02:39.441110 2175 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:02:39.441428 kubelet[2175]: I0913 00:02:39.441411 2175 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:02:39.444365 kubelet[2175]: I0913 00:02:39.444164 2175 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:02:39.444365 kubelet[2175]: I0913 00:02:39.444194 2175 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:02:39.444365 kubelet[2175]: I0913 00:02:39.444215 2175 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:02:39.444365 kubelet[2175]: I0913 00:02:39.444290 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:02:39.446050 kubelet[2175]: W0913 00:02:39.445979 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.230.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-d78c7abf5e&limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:39.446137 kubelet[2175]: E0913 00:02:39.446072 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.230.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-d78c7abf5e&limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:39.449230 kubelet[2175]: W0913 00:02:39.449064 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.230.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:39.449230 kubelet[2175]: E0913 00:02:39.449123 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.230.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:39.450493 kubelet[2175]: I0913 00:02:39.449637 2175 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:02:39.450730 kubelet[2175]: I0913 00:02:39.450676 2175 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:02:39.450913 kubelet[2175]: W0913 00:02:39.450792 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:02:39.452172 kubelet[2175]: I0913 00:02:39.452140 2175 server.go:1274] "Started kubelet"
Sep 13 00:02:39.453475 kubelet[2175]: I0913 00:02:39.453436 2175 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:02:39.454946 kubelet[2175]: I0913 00:02:39.454756 2175 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:02:39.455083 kubelet[2175]: I0913 00:02:39.455032 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:02:39.455405 kubelet[2175]: I0913 00:02:39.455377 2175 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:02:39.456944 kubelet[2175]: E0913 00:02:39.455531 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.230.74:6443/api/v1/namespaces/default/events\": dial tcp 188.245.230.74:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-d78c7abf5e.1864aea09e1e3fa5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-d78c7abf5e,UID:ci-4081-3-5-n-d78c7abf5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-d78c7abf5e,},FirstTimestamp:2025-09-13 00:02:39.452118949 +0000 UTC m=+0.977394262,LastTimestamp:2025-09-13 00:02:39.452118949 +0000 UTC m=+0.977394262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-d78c7abf5e,}"
Sep 13 00:02:39.459020 kubelet[2175]: I0913 00:02:39.458045 2175 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:02:39.459020 kubelet[2175]: I0913 00:02:39.458169 2175 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:02:39.460700 kubelet[2175]: I0913 00:02:39.460677 2175 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:02:39.461220 kubelet[2175]: E0913 00:02:39.461198 2175 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-d78c7abf5e\" not found"
Sep 13 00:02:39.462111 kubelet[2175]: I0913 00:02:39.462093 2175 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:02:39.462326 kubelet[2175]: I0913 00:02:39.462315 2175 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:02:39.463958 kubelet[2175]: W0913 00:02:39.463854 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.230.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:39.463958 kubelet[2175]: E0913 00:02:39.463923 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.230.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:39.464055 kubelet[2175]: E0913 00:02:39.464006 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.230.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-d78c7abf5e?timeout=10s\": dial tcp 188.245.230.74:6443: connect: connection refused" interval="200ms"
Sep 13 00:02:39.464651 kubelet[2175]: I0913 00:02:39.464591 2175 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:02:39.464709 kubelet[2175]: I0913 00:02:39.464692 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:02:39.467308 kubelet[2175]: E0913 00:02:39.467285 2175 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:02:39.468917 kubelet[2175]: I0913 00:02:39.468851 2175 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:02:39.483149 kubelet[2175]: I0913 00:02:39.483103 2175 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:02:39.483149 kubelet[2175]: I0913 00:02:39.483136 2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:02:39.483149 kubelet[2175]: I0913 00:02:39.483154 2175 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:02:39.486977 kubelet[2175]: I0913 00:02:39.486401 2175 policy_none.go:49] "None policy: Start"
Sep 13 00:02:39.487310 kubelet[2175]: I0913 00:02:39.487284 2175 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:02:39.487449 kubelet[2175]: I0913 00:02:39.487430 2175 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:02:39.488575 kubelet[2175]: I0913 00:02:39.488551 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:02:39.490234 kubelet[2175]: I0913 00:02:39.490169 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:02:39.490335 kubelet[2175]: I0913 00:02:39.490315 2175 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:02:39.490368 kubelet[2175]: I0913 00:02:39.490342 2175 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:02:39.491600 kubelet[2175]: E0913 00:02:39.491322 2175 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:02:39.492114 kubelet[2175]: W0913 00:02:39.491965 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.230.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:39.492766 kubelet[2175]: E0913 00:02:39.492240 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.230.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:39.498284 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 00:02:39.509933 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 00:02:39.513408 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 00:02:39.523932 kubelet[2175]: I0913 00:02:39.523022 2175 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:02:39.523932 kubelet[2175]: I0913 00:02:39.523393 2175 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:02:39.523932 kubelet[2175]: I0913 00:02:39.523412 2175 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:02:39.523932 kubelet[2175]: I0913 00:02:39.523791 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:02:39.528406 kubelet[2175]: E0913 00:02:39.528386 2175 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-d78c7abf5e\" not found"
Sep 13 00:02:39.607505 systemd[1]: Created slice kubepods-burstable-pod56023b2e468fed450a14df8478d6a818.slice - libcontainer container kubepods-burstable-pod56023b2e468fed450a14df8478d6a818.slice.
Sep 13 00:02:39.622371 systemd[1]: Created slice kubepods-burstable-podb372e79d1aa1f30962ab5e7ff997b3ac.slice - libcontainer container kubepods-burstable-podb372e79d1aa1f30962ab5e7ff997b3ac.slice.
Sep 13 00:02:39.627133 kubelet[2175]: I0913 00:02:39.626669 2175 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.627273 kubelet[2175]: E0913 00:02:39.627225 2175 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.230.74:6443/api/v1/nodes\": dial tcp 188.245.230.74:6443: connect: connection refused" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.634243 systemd[1]: Created slice kubepods-burstable-pod60536302bda123ccf12ab894a6a7b66d.slice - libcontainer container kubepods-burstable-pod60536302bda123ccf12ab894a6a7b66d.slice.
Sep 13 00:02:39.663552 kubelet[2175]: I0913 00:02:39.662958 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.663803 kubelet[2175]: I0913 00:02:39.663773 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665111 kubelet[2175]: E0913 00:02:39.664638 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.230.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-d78c7abf5e?timeout=10s\": dial tcp 188.245.230.74:6443: connect: connection refused" interval="400ms"
Sep 13 00:02:39.665181 kubelet[2175]: I0913 00:02:39.665116 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665247 kubelet[2175]: I0913 00:02:39.665179 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60536302bda123ccf12ab894a6a7b66d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-d78c7abf5e\" (UID: \"60536302bda123ccf12ab894a6a7b66d\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665247 kubelet[2175]: I0913 00:02:39.665217 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665327 kubelet[2175]: I0913 00:02:39.665248 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665327 kubelet[2175]: I0913 00:02:39.665281 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665327 kubelet[2175]: I0913 00:02:39.665317 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.665434 kubelet[2175]: I0913 00:02:39.665348 2175 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.830148 kubelet[2175]: I0913 00:02:39.830044 2175 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.830689 kubelet[2175]: E0913 00:02:39.830631 2175 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.230.74:6443/api/v1/nodes\": dial tcp 188.245.230.74:6443: connect: connection refused" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:39.919938 containerd[1482]: time="2025-09-13T00:02:39.919732459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-d78c7abf5e,Uid:56023b2e468fed450a14df8478d6a818,Namespace:kube-system,Attempt:0,}"
Sep 13 00:02:39.926666 containerd[1482]: time="2025-09-13T00:02:39.926622546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-d78c7abf5e,Uid:b372e79d1aa1f30962ab5e7ff997b3ac,Namespace:kube-system,Attempt:0,}"
Sep 13 00:02:39.938019 containerd[1482]: time="2025-09-13T00:02:39.937939680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-d78c7abf5e,Uid:60536302bda123ccf12ab894a6a7b66d,Namespace:kube-system,Attempt:0,}"
Sep 13 00:02:40.065969 kubelet[2175]: E0913 00:02:40.065795 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.230.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-d78c7abf5e?timeout=10s\": dial tcp 188.245.230.74:6443: connect: connection refused" interval="800ms"
Sep 13 00:02:40.233188 kubelet[2175]: I0913 00:02:40.233021 2175 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:40.233801 kubelet[2175]: E0913 00:02:40.233764 2175 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.230.74:6443/api/v1/nodes\": dial tcp 188.245.230.74:6443: connect: connection refused" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:40.444130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183519318.mount: Deactivated successfully.
Sep 13 00:02:40.450951 containerd[1482]: time="2025-09-13T00:02:40.450885272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:02:40.452619 containerd[1482]: time="2025-09-13T00:02:40.452585433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:02:40.453187 containerd[1482]: time="2025-09-13T00:02:40.453153989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:02:40.453951 containerd[1482]: time="2025-09-13T00:02:40.453915497Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:02:40.455733 containerd[1482]: time="2025-09-13T00:02:40.455605640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Sep 13 00:02:40.455910 containerd[1482]: time="2025-09-13T00:02:40.455857227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:02:40.456764 containerd[1482]: time="2025-09-13T00:02:40.456736247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:02:40.460815 containerd[1482]: time="2025-09-13T00:02:40.460727438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:02:40.463284 containerd[1482]: time="2025-09-13T00:02:40.463226069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.390617ms"
Sep 13 00:02:40.473085 containerd[1482]: time="2025-09-13T00:02:40.471872806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.951174ms"
Sep 13 00:02:40.473085 containerd[1482]: time="2025-09-13T00:02:40.472425476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.39258ms"
Sep 13 00:02:40.561032 kubelet[2175]: W0913 00:02:40.560990 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.230.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:40.562137 kubelet[2175]: E0913 00:02:40.562112 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.230.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:40.598996 containerd[1482]: time="2025-09-13T00:02:40.598685973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:02:40.598996 containerd[1482]: time="2025-09-13T00:02:40.598755666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:02:40.598996 containerd[1482]: time="2025-09-13T00:02:40.598782489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.599313 containerd[1482]: time="2025-09-13T00:02:40.598879963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.602512 containerd[1482]: time="2025-09-13T00:02:40.601290181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:02:40.602512 containerd[1482]: time="2025-09-13T00:02:40.601357199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:02:40.602512 containerd[1482]: time="2025-09-13T00:02:40.601377796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.602512 containerd[1482]: time="2025-09-13T00:02:40.601461379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.603530 containerd[1482]: time="2025-09-13T00:02:40.603337088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:02:40.603530 containerd[1482]: time="2025-09-13T00:02:40.603395445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:02:40.603530 containerd[1482]: time="2025-09-13T00:02:40.603421110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.603847 containerd[1482]: time="2025-09-13T00:02:40.603600810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:02:40.624095 systemd[1]: Started cri-containerd-6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c.scope - libcontainer container 6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c.
Sep 13 00:02:40.627744 systemd[1]: Started cri-containerd-5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67.scope - libcontainer container 5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67.
Sep 13 00:02:40.649221 systemd[1]: Started cri-containerd-96f5bc9b8ec39c2db86e89199c09ee478157bb24be8022207333d2ce093e3de1.scope - libcontainer container 96f5bc9b8ec39c2db86e89199c09ee478157bb24be8022207333d2ce093e3de1.
Sep 13 00:02:40.685152 kubelet[2175]: W0913 00:02:40.685096 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.230.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:40.685768 kubelet[2175]: E0913 00:02:40.685493 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.230.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:40.699493 containerd[1482]: time="2025-09-13T00:02:40.698440857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-d78c7abf5e,Uid:60536302bda123ccf12ab894a6a7b66d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67\""
Sep 13 00:02:40.705240 containerd[1482]: time="2025-09-13T00:02:40.705148099Z" level=info msg="CreateContainer within sandbox \"5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:02:40.708170 containerd[1482]: time="2025-09-13T00:02:40.708099851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-d78c7abf5e,Uid:b372e79d1aa1f30962ab5e7ff997b3ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c\""
Sep 13 00:02:40.716116 containerd[1482]: time="2025-09-13T00:02:40.716062556Z" level=info msg="CreateContainer within sandbox \"6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:02:40.717735 containerd[1482]: time="2025-09-13T00:02:40.717696577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-d78c7abf5e,Uid:56023b2e468fed450a14df8478d6a818,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f5bc9b8ec39c2db86e89199c09ee478157bb24be8022207333d2ce093e3de1\""
Sep 13 00:02:40.720419 containerd[1482]: time="2025-09-13T00:02:40.720376664Z" level=info msg="CreateContainer within sandbox \"96f5bc9b8ec39c2db86e89199c09ee478157bb24be8022207333d2ce093e3de1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:02:40.733011 containerd[1482]: time="2025-09-13T00:02:40.732870298Z" level=info msg="CreateContainer within sandbox \"5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e\""
Sep 13 00:02:40.734759 containerd[1482]: time="2025-09-13T00:02:40.733648850Z" level=info msg="StartContainer for \"7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e\""
Sep 13 00:02:40.739446 containerd[1482]: time="2025-09-13T00:02:40.739353854Z" level=info msg="CreateContainer within sandbox \"6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081\""
Sep 13 00:02:40.739958 containerd[1482]: time="2025-09-13T00:02:40.739937658Z" level=info msg="StartContainer for \"03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081\""
Sep 13 00:02:40.748331 containerd[1482]: time="2025-09-13T00:02:40.748288062Z" level=info msg="CreateContainer within sandbox \"96f5bc9b8ec39c2db86e89199c09ee478157bb24be8022207333d2ce093e3de1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b9966382137ace64353888e4eabade4eb654f23dd1af2a3f28c17a83614b74f\""
Sep 13 00:02:40.749584 containerd[1482]: time="2025-09-13T00:02:40.749080585Z" level=info msg="StartContainer for \"0b9966382137ace64353888e4eabade4eb654f23dd1af2a3f28c17a83614b74f\""
Sep 13 00:02:40.769102 systemd[1]: Started cri-containerd-03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081.scope - libcontainer container 03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081.
Sep 13 00:02:40.770933 systemd[1]: Started cri-containerd-7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e.scope - libcontainer container 7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e.
Sep 13 00:02:40.798147 systemd[1]: Started cri-containerd-0b9966382137ace64353888e4eabade4eb654f23dd1af2a3f28c17a83614b74f.scope - libcontainer container 0b9966382137ace64353888e4eabade4eb654f23dd1af2a3f28c17a83614b74f.
Sep 13 00:02:40.831592 containerd[1482]: time="2025-09-13T00:02:40.830745160Z" level=info msg="StartContainer for \"7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e\" returns successfully"
Sep 13 00:02:40.837026 containerd[1482]: time="2025-09-13T00:02:40.836973816Z" level=info msg="StartContainer for \"03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081\" returns successfully"
Sep 13 00:02:40.855332 containerd[1482]: time="2025-09-13T00:02:40.855286852Z" level=info msg="StartContainer for \"0b9966382137ace64353888e4eabade4eb654f23dd1af2a3f28c17a83614b74f\" returns successfully"
Sep 13 00:02:40.867163 kubelet[2175]: E0913 00:02:40.867060 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.230.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-d78c7abf5e?timeout=10s\": dial tcp 188.245.230.74:6443: connect: connection refused" interval="1.6s"
Sep 13 00:02:40.930910 kubelet[2175]: W0913 00:02:40.930842 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.230.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-d78c7abf5e&limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:40.931086 kubelet[2175]: E0913 00:02:40.930932 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.230.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-d78c7abf5e&limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:40.954009 kubelet[2175]: W0913 00:02:40.953949 2175 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.230.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.230.74:6443: connect: connection refused
Sep 13 00:02:40.954150 kubelet[2175]: E0913 00:02:40.954017 2175 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.230.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.230.74:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:02:41.037336 kubelet[2175]: I0913 00:02:41.037290 2175 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:43.035645 kubelet[2175]: E0913 00:02:43.035600 2175 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-d78c7abf5e\" not found" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:43.085412 kubelet[2175]: I0913 00:02:43.085369 2175 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:43.085412 kubelet[2175]: E0913 00:02:43.085414 2175 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-d78c7abf5e\": node \"ci-4081-3-5-n-d78c7abf5e\" not found"
Sep 13 00:02:43.448928 kubelet[2175]: I0913 00:02:43.448085 2175 apiserver.go:52] "Watching apiserver"
Sep 13 00:02:43.463066 kubelet[2175]: I0913 00:02:43.462978 2175 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:02:45.437276 systemd[1]: Reloading requested from client PID 2452 ('systemctl') (unit session-7.scope)...
Sep 13 00:02:45.437652 systemd[1]: Reloading...
Sep 13 00:02:45.552977 zram_generator::config[2492]: No configuration found.
Sep 13 00:02:45.649196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:02:45.735318 systemd[1]: Reloading finished in 297 ms.
Sep 13 00:02:45.777573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:45.795829 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:02:45.797962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:45.798053 systemd[1]: kubelet.service: Consumed 1.356s CPU time, 128.9M memory peak, 0B memory swap peak.
Sep 13 00:02:45.804366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:02:45.888483 update_engine[1461]: I20250913 00:02:45.887457 1461 update_attempter.cc:509] Updating boot flags...
Sep 13 00:02:45.954957 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2545)
Sep 13 00:02:45.964395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:02:45.980516 (kubelet)[2551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:02:46.070179 kubelet[2551]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:02:46.071917 kubelet[2551]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:02:46.071917 kubelet[2551]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:02:46.071917 kubelet[2551]: I0913 00:02:46.071063 2551 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:02:46.083245 kubelet[2551]: I0913 00:02:46.081602 2551 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:02:46.083477 kubelet[2551]: I0913 00:02:46.083414 2551 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:02:46.083839 kubelet[2551]: I0913 00:02:46.083817 2551 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:02:46.086268 kubelet[2551]: I0913 00:02:46.086159 2551 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:02:46.093289 kubelet[2551]: I0913 00:02:46.093240 2551 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:02:46.099088 kubelet[2551]: E0913 00:02:46.099034 2551 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:02:46.099088 kubelet[2551]: I0913 00:02:46.099086 2551 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:02:46.102805 kubelet[2551]: I0913 00:02:46.102764 2551 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:02:46.103018 kubelet[2551]: I0913 00:02:46.102883 2551 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:02:46.103018 kubelet[2551]: I0913 00:02:46.102999 2551 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:02:46.103500 kubelet[2551]: I0913 00:02:46.103027 2551 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-d78c7abf5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:02:46.103500 kubelet[2551]: I0913 00:02:46.103211 2551 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:02:46.103500 kubelet[2551]: I0913 00:02:46.103222 2551 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:02:46.103500 kubelet[2551]: I0913 00:02:46.103260 2551 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:02:46.103500 kubelet[2551]: I0913 00:02:46.103362 2551 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:02:46.103814 kubelet[2551]: I0913 00:02:46.103372 2551 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:02:46.103814 kubelet[2551]: I0913 00:02:46.103391 2551 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:02:46.103814 kubelet[2551]: I0913 00:02:46.103404 2551 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:02:46.110153 kubelet[2551]: I0913 00:02:46.108243 2551 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:02:46.110153 kubelet[2551]: I0913 00:02:46.108832 2551 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:02:46.113264 kubelet[2551]: I0913 00:02:46.113242 2551 server.go:1274] "Started kubelet"
Sep 13 00:02:46.118857 kubelet[2551]: I0913 00:02:46.118795 2551 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:02:46.121022 kubelet[2551]: I0913 00:02:46.120993 2551 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:02:46.124523 kubelet[2551]: I0913 00:02:46.124494 2551 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:02:46.126568 kubelet[2551]: I0913 00:02:46.126511 2551 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:02:46.127010 kubelet[2551]: I0913 00:02:46.126994 2551 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:02:46.136137 kubelet[2551]: I0913 00:02:46.136087 2551 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:02:46.137630 kubelet[2551]: I0913 00:02:46.137605 2551 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:02:46.137840 kubelet[2551]: E0913 00:02:46.137820 2551 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-d78c7abf5e\" not found"
Sep 13 00:02:46.146293 kubelet[2551]: I0913 00:02:46.146257 2551 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:02:46.146459 kubelet[2551]: I0913 00:02:46.146424 2551 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:02:46.153250 kubelet[2551]: I0913 00:02:46.153176 2551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:02:46.155599 kubelet[2551]: I0913 00:02:46.155572 2551 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:02:46.155948 kubelet[2551]: I0913 00:02:46.155913 2551 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:02:46.157214 kubelet[2551]: I0913 00:02:46.157121 2551 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:02:46.157214 kubelet[2551]: I0913 00:02:46.157159 2551 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:02:46.157214 kubelet[2551]: I0913 00:02:46.157178 2551 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:02:46.157350 kubelet[2551]: E0913 00:02:46.157220 2551 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:02:46.166451 kubelet[2551]: I0913 00:02:46.166417 2551 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:02:46.172200 kubelet[2551]: E0913 00:02:46.172155 2551 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:02:46.225474 kubelet[2551]: I0913 00:02:46.225449 2551 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:02:46.225766 kubelet[2551]: I0913 00:02:46.225654 2551 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:02:46.225766 kubelet[2551]: I0913 00:02:46.225676 2551 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:02:46.226226 kubelet[2551]: I0913 00:02:46.226074 2551 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:02:46.226226 kubelet[2551]: I0913 00:02:46.226092 2551 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:02:46.226226 kubelet[2551]: I0913 00:02:46.226110 2551 policy_none.go:49] "None policy: Start"
Sep 13 00:02:46.227309 kubelet[2551]: I0913 00:02:46.227021 2551 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:02:46.227309 kubelet[2551]: I0913 00:02:46.227046 2551 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:02:46.227309 kubelet[2551]: I0913 00:02:46.227185 2551 state_mem.go:75] "Updated machine memory state"
Sep 13 00:02:46.235214 kubelet[2551]: I0913 00:02:46.234369 2551 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:02:46.235214 kubelet[2551]: I0913 00:02:46.234543 2551 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:02:46.235214 kubelet[2551]: I0913 00:02:46.234554 2551 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:02:46.235214 kubelet[2551]: I0913 00:02:46.235118 2551 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:02:46.341363 kubelet[2551]: I0913 00:02:46.339917 2551 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:46.349088 kubelet[2551]: I0913 00:02:46.349035 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:46.349088 kubelet[2551]: I0913 00:02:46.349092 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:46.349250 kubelet[2551]: I0913 00:02:46.349118 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e"
Sep 13 00:02:46.349250 kubelet[2551]: I0913 00:02:46.349145 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60536302bda123ccf12ab894a6a7b66d-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-d78c7abf5e\" (UID: \"60536302bda123ccf12ab894a6a7b66d\") "
pod="kube-system/kube-scheduler-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.349472 kubelet[2551]: I0913 00:02:46.349167 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56023b2e468fed450a14df8478d6a818-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-d78c7abf5e\" (UID: \"56023b2e468fed450a14df8478d6a818\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.349472 kubelet[2551]: I0913 00:02:46.349468 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.349550 kubelet[2551]: I0913 00:02:46.349488 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.349550 kubelet[2551]: I0913 00:02:46.349506 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.349550 kubelet[2551]: I0913 00:02:46.349523 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b372e79d1aa1f30962ab5e7ff997b3ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-d78c7abf5e\" (UID: \"b372e79d1aa1f30962ab5e7ff997b3ac\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.351554 kubelet[2551]: I0913 00:02:46.351527 2551 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:46.351637 kubelet[2551]: I0913 00:02:46.351620 2551 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:02:47.105975 kubelet[2551]: I0913 00:02:47.105924 2551 apiserver.go:52] "Watching apiserver" Sep 13 00:02:47.146728 kubelet[2551]: I0913 00:02:47.146672 2551 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:02:47.149485 kubelet[2551]: I0913 00:02:47.149252 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-d78c7abf5e" podStartSLOduration=1.149236218 podStartE2EDuration="1.149236218s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.147693978 +0000 UTC m=+1.154184397" watchObservedRunningTime="2025-09-13 00:02:47.149236218 +0000 UTC m=+1.155726637" Sep 13 00:02:47.177876 kubelet[2551]: I0913 00:02:47.177774 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4081-3-5-n-d78c7abf5e" podStartSLOduration=1.177748188 podStartE2EDuration="1.177748188s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.158537137 +0000 UTC m=+1.165027636" watchObservedRunningTime="2025-09-13 00:02:47.177748188 +0000 UTC m=+1.184238607" Sep 13 00:02:47.216605 kubelet[2551]: I0913 00:02:47.216482 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-d78c7abf5e" podStartSLOduration=1.216464247 podStartE2EDuration="1.216464247s" podCreationTimestamp="2025-09-13 00:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:47.178679015 +0000 UTC m=+1.185169394" watchObservedRunningTime="2025-09-13 00:02:47.216464247 +0000 UTC m=+1.222954666" Sep 13 00:02:50.540793 kubelet[2551]: I0913 00:02:50.540749 2551 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:02:50.542046 containerd[1482]: time="2025-09-13T00:02:50.541953760Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:02:50.542891 kubelet[2551]: I0913 00:02:50.542755 2551 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:02:51.559123 systemd[1]: Created slice kubepods-besteffort-podaf70af50_db93_47a1_a5d3_0c029f681f7c.slice - libcontainer container kubepods-besteffort-podaf70af50_db93_47a1_a5d3_0c029f681f7c.slice. Sep 13 00:02:51.581616 kubelet[2551]: I0913 00:02:51.581533 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af70af50-db93-47a1-a5d3-0c029f681f7c-kube-proxy\") pod \"kube-proxy-m74rg\" (UID: \"af70af50-db93-47a1-a5d3-0c029f681f7c\") " pod="kube-system/kube-proxy-m74rg" Sep 13 00:02:51.581616 kubelet[2551]: I0913 00:02:51.581613 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af70af50-db93-47a1-a5d3-0c029f681f7c-xtables-lock\") pod \"kube-proxy-m74rg\" (UID: \"af70af50-db93-47a1-a5d3-0c029f681f7c\") " pod="kube-system/kube-proxy-m74rg" Sep 13 00:02:51.582020 kubelet[2551]: I0913 00:02:51.581638 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwxx\" (UniqueName: \"kubernetes.io/projected/af70af50-db93-47a1-a5d3-0c029f681f7c-kube-api-access-7bwxx\") pod \"kube-proxy-m74rg\" (UID: \"af70af50-db93-47a1-a5d3-0c029f681f7c\") " pod="kube-system/kube-proxy-m74rg" Sep 13 00:02:51.582020 kubelet[2551]: I0913 00:02:51.581666 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af70af50-db93-47a1-a5d3-0c029f681f7c-lib-modules\") pod \"kube-proxy-m74rg\" (UID: \"af70af50-db93-47a1-a5d3-0c029f681f7c\") " pod="kube-system/kube-proxy-m74rg" Sep 13 00:02:51.679033 systemd[1]: Created slice kubepods-besteffort-pod5b23ad63_bf1e_4d40_aa53_8273415a8e72.slice - libcontainer container kubepods-besteffort-pod5b23ad63_bf1e_4d40_aa53_8273415a8e72.slice. 
Sep 13 00:02:51.684809 kubelet[2551]: I0913 00:02:51.684764 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5b23ad63-bf1e-4d40-aa53-8273415a8e72-var-lib-calico\") pod \"tigera-operator-58fc44c59b-p6xts\" (UID: \"5b23ad63-bf1e-4d40-aa53-8273415a8e72\") " pod="tigera-operator/tigera-operator-58fc44c59b-p6xts" Sep 13 00:02:51.684945 kubelet[2551]: I0913 00:02:51.684818 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5hwm\" (UniqueName: \"kubernetes.io/projected/5b23ad63-bf1e-4d40-aa53-8273415a8e72-kube-api-access-q5hwm\") pod \"tigera-operator-58fc44c59b-p6xts\" (UID: \"5b23ad63-bf1e-4d40-aa53-8273415a8e72\") " pod="tigera-operator/tigera-operator-58fc44c59b-p6xts" Sep 13 00:02:51.869089 containerd[1482]: time="2025-09-13T00:02:51.868345455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m74rg,Uid:af70af50-db93-47a1-a5d3-0c029f681f7c,Namespace:kube-system,Attempt:0,}" Sep 13 00:02:51.896370 containerd[1482]: time="2025-09-13T00:02:51.895646454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:51.896370 containerd[1482]: time="2025-09-13T00:02:51.895706384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:51.896370 containerd[1482]: time="2025-09-13T00:02:51.895722777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:51.896370 containerd[1482]: time="2025-09-13T00:02:51.896185072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:51.920183 systemd[1]: Started cri-containerd-5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729.scope - libcontainer container 5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729. 
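The "RunPodSandbox for &PodSandboxMetadata{...}" request above maps to a single CRI RPC. A sketch of it as a function for the same file as the previous sketch (no new imports needed); the metadata values are copied from the kube-proxy entry, and everything omitted from PodSandboxConfig is left to defaults here:

    // runSandbox issues the CRI RunPodSandbox call; rt is the
    // RuntimeServiceClient dialed in the earlier sketch.
    func runSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
        resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-m74rg",
                    Namespace: "kube-system",
                    Uid:       "af70af50-db93-47a1-a5d3-0c029f681f7c",
                },
            },
        })
        if err != nil {
            return "", err
        }
        // resp.PodSandboxId is the 64-hex-character id echoed in the
        // "returns sandbox id" entry that follows.
        return resp.PodSandboxId, nil
    }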
Sep 13 00:02:51.944620 containerd[1482]: time="2025-09-13T00:02:51.944579010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m74rg,Uid:af70af50-db93-47a1-a5d3-0c029f681f7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729\"" Sep 13 00:02:51.948643 containerd[1482]: time="2025-09-13T00:02:51.948547479Z" level=info msg="CreateContainer within sandbox \"5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:02:51.963259 containerd[1482]: time="2025-09-13T00:02:51.963163809Z" level=info msg="CreateContainer within sandbox \"5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"718a04beec771cc4db5056a31d71f4f491d3eb84e4679bb717efc41b7283268a\"" Sep 13 00:02:51.964106 containerd[1482]: time="2025-09-13T00:02:51.963999482Z" level=info msg="StartContainer for \"718a04beec771cc4db5056a31d71f4f491d3eb84e4679bb717efc41b7283268a\"" Sep 13 00:02:51.986892 containerd[1482]: time="2025-09-13T00:02:51.986845968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-p6xts,Uid:5b23ad63-bf1e-4d40-aa53-8273415a8e72,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:02:51.992078 systemd[1]: Started cri-containerd-718a04beec771cc4db5056a31d71f4f491d3eb84e4679bb717efc41b7283268a.scope - libcontainer container 718a04beec771cc4db5056a31d71f4f491d3eb84e4679bb717efc41b7283268a. Sep 13 00:02:52.025855 containerd[1482]: time="2025-09-13T00:02:52.025127420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:02:52.025855 containerd[1482]: time="2025-09-13T00:02:52.025175279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:02:52.025855 containerd[1482]: time="2025-09-13T00:02:52.025185115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:52.025855 containerd[1482]: time="2025-09-13T00:02:52.025297867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:02:52.030328 containerd[1482]: time="2025-09-13T00:02:52.030287383Z" level=info msg="StartContainer for \"718a04beec771cc4db5056a31d71f4f491d3eb84e4679bb717efc41b7283268a\" returns successfully" Sep 13 00:02:52.048102 systemd[1]: Started cri-containerd-b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5.scope - libcontainer container b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5. 
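The "CreateContainer within sandbox ... returns container id" / "StartContainer" pair above is the next two CRI RPCs in the same flow. A sketch under the same assumptions; the kube-proxy image tag is a guess inferred from kubeletVersion v1.31.8, since the log never names it:

    // createAndStart mirrors the CreateContainer/StartContainer pair;
    // sandboxID and sandboxCfg are the values used with RunPodSandbox.
    func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Assumed tag; not stated anywhere in the log.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.8"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        // created.ContainerId is the id echoed by "returns container id",
        // and the handle StartContainer needs.
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        return created.ContainerId, err
    }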
Sep 13 00:02:52.089144 containerd[1482]: time="2025-09-13T00:02:52.089059808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-p6xts,Uid:5b23ad63-bf1e-4d40-aa53-8273415a8e72,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5\"" Sep 13 00:02:52.092340 containerd[1482]: time="2025-09-13T00:02:52.092298030Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:02:52.378340 kubelet[2551]: I0913 00:02:52.377882 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m74rg" podStartSLOduration=1.377857526 podStartE2EDuration="1.377857526s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:02:52.23051584 +0000 UTC m=+6.237006259" watchObservedRunningTime="2025-09-13 00:02:52.377857526 +0000 UTC m=+6.384347945" Sep 13 00:02:52.714501 systemd[1]: run-containerd-runc-k8s.io-5d84a39071c2be5b40e1a3d197a1433f526e4924d7a7acca5eb5409b270ae729-runc.cSC4CE.mount: Deactivated successfully. Sep 13 00:02:53.983289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832905076.mount: Deactivated successfully. Sep 13 00:02:54.402586 containerd[1482]: time="2025-09-13T00:02:54.401511704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:54.402586 containerd[1482]: time="2025-09-13T00:02:54.402437269Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 13 00:02:54.403268 containerd[1482]: time="2025-09-13T00:02:54.403238977Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:54.408884 containerd[1482]: time="2025-09-13T00:02:54.408836814Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:02:54.410092 containerd[1482]: time="2025-09-13T00:02:54.410059420Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.317714643s" Sep 13 00:02:54.410203 containerd[1482]: time="2025-09-13T00:02:54.410187317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 00:02:54.414609 containerd[1482]: time="2025-09-13T00:02:54.414581152Z" level=info msg="CreateContainer within sandbox \"b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:02:54.436808 containerd[1482]: time="2025-09-13T00:02:54.436744470Z" level=info msg="CreateContainer within sandbox \"b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701\"" Sep 13 
00:02:54.438495 containerd[1482]: time="2025-09-13T00:02:54.437526376Z" level=info msg="StartContainer for \"2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701\"" Sep 13 00:02:54.465090 systemd[1]: Started cri-containerd-2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701.scope - libcontainer container 2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701. Sep 13 00:02:54.497223 containerd[1482]: time="2025-09-13T00:02:54.497089674Z" level=info msg="StartContainer for \"2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701\" returns successfully" Sep 13 00:02:55.239209 kubelet[2551]: I0913 00:02:55.239094 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-p6xts" podStartSLOduration=1.9188498090000001 podStartE2EDuration="4.239074518s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="2025-09-13 00:02:52.091417005 +0000 UTC m=+6.097907384" lastFinishedPulling="2025-09-13 00:02:54.411641674 +0000 UTC m=+8.418132093" observedRunningTime="2025-09-13 00:02:55.238372948 +0000 UTC m=+9.244863407" watchObservedRunningTime="2025-09-13 00:02:55.239074518 +0000 UTC m=+9.245565017" Sep 13 00:03:00.690490 sudo[1685]: pam_unix(sudo:session): session closed for user root Sep 13 00:03:00.849663 sshd[1682]: pam_unix(sshd:session): session closed for user core Sep 13 00:03:00.853304 systemd[1]: sshd@6-188.245.230.74:22-147.75.109.163:40920.service: Deactivated successfully. Sep 13 00:03:00.859274 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:03:00.859855 systemd[1]: session-7.scope: Consumed 6.405s CPU time, 151.2M memory peak, 0B memory swap peak. Sep 13 00:03:00.861811 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:03:00.864137 systemd-logind[1459]: Removed session 7. Sep 13 00:03:09.120089 systemd[1]: Created slice kubepods-besteffort-pod7a5dd558_436e_4eda_8c7c_b7cf21647de6.slice - libcontainer container kubepods-besteffort-pod7a5dd558_436e_4eda_8c7c_b7cf21647de6.slice. Sep 13 00:03:09.293481 kubelet[2551]: I0913 00:03:09.293401 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g87j\" (UniqueName: \"kubernetes.io/projected/7a5dd558-436e-4eda-8c7c-b7cf21647de6-kube-api-access-8g87j\") pod \"calico-typha-6ffb89fcc4-8tnq8\" (UID: \"7a5dd558-436e-4eda-8c7c-b7cf21647de6\") " pod="calico-system/calico-typha-6ffb89fcc4-8tnq8" Sep 13 00:03:09.294188 kubelet[2551]: I0913 00:03:09.293499 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a5dd558-436e-4eda-8c7c-b7cf21647de6-tigera-ca-bundle\") pod \"calico-typha-6ffb89fcc4-8tnq8\" (UID: \"7a5dd558-436e-4eda-8c7c-b7cf21647de6\") " pod="calico-system/calico-typha-6ffb89fcc4-8tnq8" Sep 13 00:03:09.294188 kubelet[2551]: I0913 00:03:09.293546 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7a5dd558-436e-4eda-8c7c-b7cf21647de6-typha-certs\") pod \"calico-typha-6ffb89fcc4-8tnq8\" (UID: \"7a5dd558-436e-4eda-8c7c-b7cf21647de6\") " pod="calico-system/calico-typha-6ffb89fcc4-8tnq8" Sep 13 00:03:09.374540 systemd[1]: Created slice kubepods-besteffort-pod7436b545_6b8f_4746_b62e_6bd1ff0bbb0c.slice - libcontainer container kubepods-besteffort-pod7436b545_6b8f_4746_b62e_6bd1ff0bbb0c.slice. 
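The PullImage / "Pulled image ... in 2.317714643s" exchange above goes through the CRI image service rather than the runtime service. A sketch that issues and times the same pull, again for the same file as the earlier sketches (the duration string containerd prints is Go's time.Duration formatting, which time.Since reproduces):

    // pullTimed pulls the tigera-operator image and reports the elapsed
    // time; img is an ImageServiceClient on the same connection as rt.
    func pullTimed(ctx context.Context, img runtimeapi.ImageServiceClient) error {
        start := time.Now()
        resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
        })
        if err != nil {
            return err
        }
        // resp.ImageRef is the sha256 image id that the "Pulled image"
        // entry reports as the returned image reference.
        log.Printf("pulled %s in %s", resp.ImageRef, time.Since(start))
        return nil
    }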
Sep 13 00:03:09.428237 containerd[1482]: time="2025-09-13T00:03:09.428165868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffb89fcc4-8tnq8,Uid:7a5dd558-436e-4eda-8c7c-b7cf21647de6,Namespace:calico-system,Attempt:0,}" Sep 13 00:03:09.465080 containerd[1482]: time="2025-09-13T00:03:09.464394608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:09.465080 containerd[1482]: time="2025-09-13T00:03:09.464456772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:09.465080 containerd[1482]: time="2025-09-13T00:03:09.464467692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:09.465080 containerd[1482]: time="2025-09-13T00:03:09.464552137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:09.497987 kubelet[2551]: I0913 00:03:09.497838 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-flexvol-driver-host\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.497987 kubelet[2551]: I0913 00:03:09.497880 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-node-certs\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.497987 kubelet[2551]: I0913 00:03:09.497958 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-cni-log-dir\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.498947 kubelet[2551]: I0913 00:03:09.498009 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-var-lib-calico\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.498947 kubelet[2551]: I0913 00:03:09.498038 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-cni-bin-dir\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.498947 kubelet[2551]: I0913 00:03:09.498054 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-cni-net-dir\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.498947 kubelet[2551]: I0913 00:03:09.498068 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-policysync\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.498947 kubelet[2551]: I0913 00:03:09.498085 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-tigera-ca-bundle\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.499058 kubelet[2551]: I0913 00:03:09.498113 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-lib-modules\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.499058 kubelet[2551]: I0913 00:03:09.498129 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-var-run-calico\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.499058 kubelet[2551]: I0913 00:03:09.498146 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9mc\" (UniqueName: \"kubernetes.io/projected/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-kube-api-access-hm9mc\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.499058 kubelet[2551]: I0913 00:03:09.498165 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7436b545-6b8f-4746-b62e-6bd1ff0bbb0c-xtables-lock\") pod \"calico-node-dp86n\" (UID: \"7436b545-6b8f-4746-b62e-6bd1ff0bbb0c\") " pod="calico-system/calico-node-dp86n" Sep 13 00:03:09.506177 systemd[1]: Started cri-containerd-407c3f5189ff847842093a0eca0cce2a6000587494a61cb59c05cfac5b0e13fa.scope - libcontainer container 407c3f5189ff847842093a0eca0cce2a6000587494a61cb59c05cfac5b0e13fa. Sep 13 00:03:09.509711 kubelet[2551]: E0913 00:03:09.509419 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e" Sep 13 00:03:09.602865 kubelet[2551]: E0913 00:03:09.602837 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:03:09.603154 kubelet[2551]: W0913 00:03:09.603002 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:03:09.603154 kubelet[2551]: E0913 00:03:09.603034 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
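The driver-call.go:262 / driver-call.go:149 / plugins.go:691 triplet above recurs dozens of times within the same second, with only the timestamps changing: on every plugin probe the kubelet execs the FlexVolume driver with "init" and tries to unmarshal the JSON status the driver is supposed to print. The nodeagent~uds binary is absent, so the exec fails, the captured output is empty, and unmarshalling an empty byte slice yields exactly "unexpected end of JSON input". A self-contained sketch of that failure mode (a standalone program, separate from the CRI sketches; the driver path is taken from the log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is the minimal shape of a FlexVolume driver reply,
    // e.g. {"status":"Success"}.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        out, execErr := exec.Command(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init").Output()

        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // With the binary missing, out is empty and err is
            // "unexpected end of JSON input" (driver-call.go:262);
            // execErr carries the missing-executable failure that the
            // kubelet surfaces at driver-call.go:149.
            fmt.Println("unmarshal:", err, "| exec:", execErr)
            return
        }
        fmt.Println("driver status:", st.Status)
    }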
Sep 13 00:03:09.643711 containerd[1482]: time="2025-09-13T00:03:09.643534222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ffb89fcc4-8tnq8,Uid:7a5dd558-436e-4eda-8c7c-b7cf21647de6,Namespace:calico-system,Attempt:0,} returns sandbox id \"407c3f5189ff847842093a0eca0cce2a6000587494a61cb59c05cfac5b0e13fa\"" Sep 13 00:03:09.650017 containerd[1482]: time="2025-09-13T00:03:09.649968296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:03:09.679859 containerd[1482]: time="2025-09-13T00:03:09.679811084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dp86n,Uid:7436b545-6b8f-4746-b62e-6bd1ff0bbb0c,Namespace:calico-system,Attempt:0,}" Sep 13 00:03:09.699765 kubelet[2551]: I0913 00:03:09.699502 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96d6c8b4-b0e0-4570-9741-27188bdbb60e-varrun\") pod \"csi-node-driver-54r9g\" (UID: \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\") " pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:09.699765 kubelet[2551]: I0913 00:03:09.699730 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96d6c8b4-b0e0-4570-9741-27188bdbb60e-socket-dir\") pod \"csi-node-driver-54r9g\" (UID: \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\") " pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:09.700409 kubelet[2551]: I0913 00:03:09.700335 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96d6c8b4-b0e0-4570-9741-27188bdbb60e-registration-dir\") pod \"csi-node-driver-54r9g\" (UID: \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\") " pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:09.701282 kubelet[2551]: I0913 00:03:09.700972 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8sq\" (UniqueName: \"kubernetes.io/projected/96d6c8b4-b0e0-4570-9741-27188bdbb60e-kube-api-access-th8sq\") pod \"csi-node-driver-54r9g\" (UID: \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\") " pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:09.704454 kubelet[2551]: I0913 00:03:09.704230 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96d6c8b4-b0e0-4570-9741-27188bdbb60e-kubelet-dir\") pod \"csi-node-driver-54r9g\" (UID: \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\") " pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:09.724455 containerd[1482]: time="2025-09-13T00:03:09.723845501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:09.724455 containerd[1482]: time="2025-09-13T00:03:09.724316490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:09.724455 containerd[1482]: time="2025-09-13T00:03:09.724330011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:09.725251 containerd[1482]: time="2025-09-13T00:03:09.724994652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:09.751094 systemd[1]: Started cri-containerd-bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307.scope - libcontainer container bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307. Sep 13 00:03:09.810288 containerd[1482]: time="2025-09-13T00:03:09.809632757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dp86n,Uid:7436b545-6b8f-4746-b62e-6bd1ff0bbb0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\"" Sep 13 00:03:09.823302 kubelet[2551]: E0913 00:03:09.823276 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:03:09.823302 kubelet[2551]: W0913 00:03:09.823297 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:03:09.823531 kubelet[2551]: E0913 00:03:09.823472 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:03:09.823788 kubelet[2551]: E0913 00:03:09.823684 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:03:09.823788 kubelet[2551]: W0913 00:03:09.823715 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:03:09.823788 kubelet[2551]: E0913 00:03:09.823729 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:03:09.824029 kubelet[2551]: E0913 00:03:09.824008 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:03:09.824073 kubelet[2551]: W0913 00:03:09.824038 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:03:09.824073 kubelet[2551]: E0913 00:03:09.824050 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:03:09.838546 kubelet[2551]: E0913 00:03:09.838513 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:03:09.838546 kubelet[2551]: W0913 00:03:09.838539 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:03:09.838774 kubelet[2551]: E0913 00:03:09.838561 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:03:11.053510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145766635.mount: Deactivated successfully. 
Sep 13 00:03:11.160992 kubelet[2551]: E0913 00:03:11.158591 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e"
Sep 13 00:03:12.214002 containerd[1482]: time="2025-09-13T00:03:12.213872213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:12.215632 containerd[1482]: time="2025-09-13T00:03:12.215583384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 13 00:03:12.216928 containerd[1482]: time="2025-09-13T00:03:12.216838891Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:12.219637 containerd[1482]: time="2025-09-13T00:03:12.219574517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:12.221275 containerd[1482]: time="2025-09-13T00:03:12.221209244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.571182825s"
Sep 13 00:03:12.221275 containerd[1482]: time="2025-09-13T00:03:12.221261527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 13 00:03:12.223275 containerd[1482]: time="2025-09-13T00:03:12.223236032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:03:12.243027 containerd[1482]: time="2025-09-13T00:03:12.242445696Z" level=info msg="CreateContainer within sandbox \"407c3f5189ff847842093a0eca0cce2a6000587494a61cb59c05cfac5b0e13fa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:03:12.262649 containerd[1482]: time="2025-09-13T00:03:12.262605050Z" level=info msg="CreateContainer within sandbox \"407c3f5189ff847842093a0eca0cce2a6000587494a61cb59c05cfac5b0e13fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d72b96d6eb1285e3cd20dc5a62f9c865e261ae78f6941a111dcc6f6d4b840937\""
Sep 13 00:03:12.263553 containerd[1482]: time="2025-09-13T00:03:12.263532739Z" level=info msg="StartContainer for \"d72b96d6eb1285e3cd20dc5a62f9c865e261ae78f6941a111dcc6f6d4b840937\""
Sep 13 00:03:12.298432 systemd[1]: Started cri-containerd-d72b96d6eb1285e3cd20dc5a62f9c865e261ae78f6941a111dcc6f6d4b840937.scope - libcontainer container d72b96d6eb1285e3cd20dc5a62f9c865e261ae78f6941a111dcc6f6d4b840937.
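The calico-typha entries trace the flow the kubelet drives over CRI: pull the image, create the container inside the pod sandbox, start it, and let systemd track the runc shim in a .scope unit. A rough analogue of the same pull/create/start sequence using containerd's Go smart client (a sketch, not the CRI server's code; the container ID and snapshot name here are made up for the demo):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack, as in the "Pulled image ... in 2.57s" entry.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: a snapshot plus an OCI runtime spec built from the image.
	ctr, err := client.NewContainer(ctx, "calico-typha-demo",
		containerd.WithNewSnapshot("calico-typha-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}
	defer ctr.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (shim + runc) and start it.
	task, err := ctr.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}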
Sep 13 00:03:12.343978 containerd[1482]: time="2025-09-13T00:03:12.343820857Z" level=info msg="StartContainer for \"d72b96d6eb1285e3cd20dc5a62f9c865e261ae78f6941a111dcc6f6d4b840937\" returns successfully"
Sep 13 00:03:13.158154 kubelet[2551]: E0913 00:03:13.158064 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e"
Sep 13 00:03:13.305081 kubelet[2551]: I0913 00:03:13.305015 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ffb89fcc4-8tnq8" podStartSLOduration=1.7295850019999999 podStartE2EDuration="4.304995276s" podCreationTimestamp="2025-09-13 00:03:09 +0000 UTC" firstStartedPulling="2025-09-13 00:03:09.647029636 +0000 UTC m=+23.653520015" lastFinishedPulling="2025-09-13 00:03:12.22243987 +0000 UTC m=+26.228930289" observedRunningTime="2025-09-13 00:03:13.291605794 +0000 UTC m=+27.298096213" watchObservedRunningTime="2025-09-13 00:03:13.304995276 +0000 UTC m=+27.311485655"
Sep 13 00:03:13.324815 kubelet[2551]: E0913 00:03:13.324217 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.324815 kubelet[2551]: W0913 00:03:13.324253 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.324815 kubelet[2551]: E0913 00:03:13.324276 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.325172 kubelet[2551]: E0913 00:03:13.324984 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.325172 kubelet[2551]: W0913 00:03:13.324997 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.325172 kubelet[2551]: E0913 00:03:13.325010 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.327041 kubelet[2551]: E0913 00:03:13.326610 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.327041 kubelet[2551]: W0913 00:03:13.326626 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.327041 kubelet[2551]: E0913 00:03:13.326639 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.328126 kubelet[2551]: E0913 00:03:13.328023 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.328126 kubelet[2551]: W0913 00:03:13.328038 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.328126 kubelet[2551]: E0913 00:03:13.328051 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.328505 kubelet[2551]: E0913 00:03:13.328466 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.328505 kubelet[2551]: W0913 00:03:13.328478 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.328680 kubelet[2551]: E0913 00:03:13.328582 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.328914 kubelet[2551]: E0913 00:03:13.328845 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.328914 kubelet[2551]: W0913 00:03:13.328860 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.329132 kubelet[2551]: E0913 00:03:13.329055 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.330920 kubelet[2551]: E0913 00:03:13.330826 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.330920 kubelet[2551]: W0913 00:03:13.330849 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.330920 kubelet[2551]: E0913 00:03:13.330862 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.331543 kubelet[2551]: E0913 00:03:13.331350 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.331543 kubelet[2551]: W0913 00:03:13.331364 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.331543 kubelet[2551]: E0913 00:03:13.331376 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.332303 kubelet[2551]: E0913 00:03:13.332198 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.332443 kubelet[2551]: W0913 00:03:13.332383 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.332443 kubelet[2551]: E0913 00:03:13.332403 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.332879 kubelet[2551]: E0913 00:03:13.332783 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.332879 kubelet[2551]: W0913 00:03:13.332795 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.332879 kubelet[2551]: E0913 00:03:13.332836 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.333250 kubelet[2551]: E0913 00:03:13.333224 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.333363 kubelet[2551]: W0913 00:03:13.333308 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.333363 kubelet[2551]: E0913 00:03:13.333325 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.333687 kubelet[2551]: E0913 00:03:13.333587 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.333687 kubelet[2551]: W0913 00:03:13.333597 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.333687 kubelet[2551]: E0913 00:03:13.333607 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.334189 kubelet[2551]: E0913 00:03:13.334056 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.334189 kubelet[2551]: W0913 00:03:13.334069 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.334189 kubelet[2551]: E0913 00:03:13.334083 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.335054 kubelet[2551]: E0913 00:03:13.334838 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.335054 kubelet[2551]: W0913 00:03:13.334852 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.335054 kubelet[2551]: E0913 00:03:13.334864 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.335428 kubelet[2551]: E0913 00:03:13.335307 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.335428 kubelet[2551]: W0913 00:03:13.335320 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.335428 kubelet[2551]: E0913 00:03:13.335332 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.340524 kubelet[2551]: E0913 00:03:13.340506 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.340524 kubelet[2551]: W0913 00:03:13.340522 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.340713 kubelet[2551]: E0913 00:03:13.340535 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.340809 kubelet[2551]: E0913 00:03:13.340799 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.340955 kubelet[2551]: W0913 00:03:13.340809 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.340955 kubelet[2551]: E0913 00:03:13.340822 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.341529 kubelet[2551]: E0913 00:03:13.341512 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.341529 kubelet[2551]: W0913 00:03:13.341528 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.341616 kubelet[2551]: E0913 00:03:13.341549 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.341921 kubelet[2551]: E0913 00:03:13.341891 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342017 kubelet[2551]: W0913 00:03:13.341923 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342017 kubelet[2551]: E0913 00:03:13.341939 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.342122 kubelet[2551]: E0913 00:03:13.342111 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342197 kubelet[2551]: W0913 00:03:13.342122 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342197 kubelet[2551]: E0913 00:03:13.342168 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.342318 kubelet[2551]: E0913 00:03:13.342303 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342318 kubelet[2551]: W0913 00:03:13.342316 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342434 kubelet[2551]: E0913 00:03:13.342368 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.342491 kubelet[2551]: E0913 00:03:13.342482 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342580 kubelet[2551]: W0913 00:03:13.342491 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342580 kubelet[2551]: E0913 00:03:13.342540 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.342630 kubelet[2551]: E0913 00:03:13.342625 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342666 kubelet[2551]: W0913 00:03:13.342632 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342666 kubelet[2551]: E0913 00:03:13.342646 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.342861 kubelet[2551]: E0913 00:03:13.342849 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.342861 kubelet[2551]: W0913 00:03:13.342861 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.342928 kubelet[2551]: E0913 00:03:13.342875 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.343308 kubelet[2551]: E0913 00:03:13.343197 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.343308 kubelet[2551]: W0913 00:03:13.343211 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.343308 kubelet[2551]: E0913 00:03:13.343227 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.343566 kubelet[2551]: E0913 00:03:13.343467 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.343566 kubelet[2551]: W0913 00:03:13.343479 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.343678 kubelet[2551]: E0913 00:03:13.343668 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.343784 kubelet[2551]: W0913 00:03:13.343733 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.343784 kubelet[2551]: E0913 00:03:13.343747 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.343848 kubelet[2551]: E0913 00:03:13.343712 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.344189 kubelet[2551]: E0913 00:03:13.344037 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.344189 kubelet[2551]: W0913 00:03:13.344071 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.344189 kubelet[2551]: E0913 00:03:13.344089 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.344500 kubelet[2551]: E0913 00:03:13.344360 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.344500 kubelet[2551]: W0913 00:03:13.344372 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.344500 kubelet[2551]: E0913 00:03:13.344390 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.344631 kubelet[2551]: E0913 00:03:13.344607 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.344666 kubelet[2551]: W0913 00:03:13.344647 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.344690 kubelet[2551]: E0913 00:03:13.344666 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.345260 kubelet[2551]: E0913 00:03:13.345222 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.345463 kubelet[2551]: W0913 00:03:13.345336 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.345463 kubelet[2551]: E0913 00:03:13.345360 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.345717 kubelet[2551]: E0913 00:03:13.345689 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.345717 kubelet[2551]: W0913 00:03:13.345702 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.345884 kubelet[2551]: E0913 00:03:13.345795 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.346057 kubelet[2551]: E0913 00:03:13.346017 2551 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:03:13.346057 kubelet[2551]: W0913 00:03:13.346028 2551 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:03:13.346057 kubelet[2551]: E0913 00:03:13.346039 2551 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:03:13.537127 containerd[1482]: time="2025-09-13T00:03:13.537044377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:13.540096 containerd[1482]: time="2025-09-13T00:03:13.539124483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814"
Sep 13 00:03:13.541875 containerd[1482]: time="2025-09-13T00:03:13.541466843Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:13.548322 containerd[1482]: time="2025-09-13T00:03:13.547427586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:13.548322 containerd[1482]: time="2025-09-13T00:03:13.548166344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.324892029s"
Sep 13 00:03:13.548322 containerd[1482]: time="2025-09-13T00:03:13.548201346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\""
Sep 13 00:03:13.555189 containerd[1482]: time="2025-09-13T00:03:13.555142779Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:03:13.575367 containerd[1482]: time="2025-09-13T00:03:13.575318927Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678\""
Sep 13 00:03:13.576705 containerd[1482]: time="2025-09-13T00:03:13.576672676Z" level=info msg="StartContainer for \"903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678\""
Sep 13 00:03:13.612302 systemd[1]: Started cri-containerd-903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678.scope - libcontainer container 903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678.
Sep 13 00:03:13.656282 containerd[1482]: time="2025-09-13T00:03:13.656195687Z" level=info msg="StartContainer for \"903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678\" returns successfully"
Sep 13 00:03:13.677161 systemd[1]: cri-containerd-903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678.scope: Deactivated successfully.
Sep 13 00:03:13.712725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678-rootfs.mount: Deactivated successfully.
Sep 13 00:03:13.809355 containerd[1482]: time="2025-09-13T00:03:13.809043074Z" level=info msg="shim disconnected" id=903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678 namespace=k8s.io
Sep 13 00:03:13.809355 containerd[1482]: time="2025-09-13T00:03:13.809107997Z" level=warning msg="cleaning up after shim disconnected" id=903746fd28c2b6423083c6fb4177684addaa57b626174b0a908983b4477cb678 namespace=k8s.io
Sep 13 00:03:13.809355 containerd[1482]: time="2025-09-13T00:03:13.809120958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:14.282676 containerd[1482]: time="2025-09-13T00:03:14.282631023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:03:15.158085 kubelet[2551]: E0913 00:03:15.157999 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e"
Sep 13 00:03:16.584088 containerd[1482]: time="2025-09-13T00:03:16.584023582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:16.587257 containerd[1482]: time="2025-09-13T00:03:16.587051518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 13 00:03:16.587257 containerd[1482]: time="2025-09-13T00:03:16.587214125Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:16.592942 containerd[1482]: time="2025-09-13T00:03:16.592724412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:03:16.593539 containerd[1482]: time="2025-09-13T00:03:16.593504287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.310735897s"
Sep 13 00:03:16.593591 containerd[1482]: time="2025-09-13T00:03:16.593541488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 13 00:03:16.596733 containerd[1482]: time="2025-09-13T00:03:16.596472460Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:03:16.615652 containerd[1482]: time="2025-09-13T00:03:16.615510552Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b\""
Sep 13 00:03:16.617134 containerd[1482]: time="2025-09-13T00:03:16.616752848Z" level=info msg="StartContainer for \"40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b\""
Sep 13 00:03:16.659272 systemd[1]: Started cri-containerd-40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b.scope - libcontainer container 40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b.
Sep 13 00:03:16.697906 containerd[1482]: time="2025-09-13T00:03:16.697855199Z" level=info msg="StartContainer for \"40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b\" returns successfully"
Sep 13 00:03:17.158711 kubelet[2551]: E0913 00:03:17.158213 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e"
Sep 13 00:03:17.188216 containerd[1482]: time="2025-09-13T00:03:17.188159816Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:03:17.192750 systemd[1]: cri-containerd-40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b.scope: Deactivated successfully.
Sep 13 00:03:17.218745 kubelet[2551]: I0913 00:03:17.217983 2551 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:03:17.269313 kubelet[2551]: W0913 00:03:17.269281 2551 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-5-n-d78c7abf5e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-d78c7abf5e' and this object
Sep 13 00:03:17.269615 kubelet[2551]: E0913 00:03:17.269588 2551 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-5-n-d78c7abf5e\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-5-n-d78c7abf5e' and this object" logger="UnhandledError"
Sep 13 00:03:17.272280 kubelet[2551]: I0913 00:03:17.271083 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2z8\" (UniqueName: \"kubernetes.io/projected/ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d-kube-api-access-hl2z8\") pod \"coredns-7c65d6cfc9-6fdv4\" (UID: \"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d\") " pod="kube-system/coredns-7c65d6cfc9-6fdv4"
Sep 13 00:03:17.272280 kubelet[2551]: I0913 00:03:17.271133 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d-config-volume\") pod \"coredns-7c65d6cfc9-6fdv4\" (UID: \"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d\") " pod="kube-system/coredns-7c65d6cfc9-6fdv4"
Sep 13 00:03:17.273377 systemd[1]: Created slice kubepods-burstable-podff0975d8_c7ba_42bd_bf6d_aeeda659ab5d.slice - libcontainer container kubepods-burstable-podff0975d8_c7ba_42bd_bf6d_aeeda659ab5d.slice.
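The "failed to reload cni configuration ... no network config found in /etc/cni/net.d" error above is containerd reacting to a file event in the CNI directory before any usable network config exists: the write was for calico-kubeconfig, while Calico's install-cni container has not yet dropped a *.conflist. A sketch of the underlying condition (the glob patterns are an assumption about typical CNI loaders, not containerd's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	var configs []string
	// Scan for the file types a CNI config loader typically accepts.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(dir, pat))
		configs = append(configs, m...)
	}
	if len(configs) == 0 {
		// Mirrors the runtime's complaint: the directory has no network config yet.
		fmt.Fprintf(os.Stderr, "cni config load failed: no network config found in %s\n", dir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", configs)
}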
Sep 13 00:03:17.288250 containerd[1482]: time="2025-09-13T00:03:17.288192315Z" level=info msg="shim disconnected" id=40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b namespace=k8s.io
Sep 13 00:03:17.288554 containerd[1482]: time="2025-09-13T00:03:17.288533410Z" level=warning msg="cleaning up after shim disconnected" id=40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b namespace=k8s.io
Sep 13 00:03:17.289700 containerd[1482]: time="2025-09-13T00:03:17.288606973Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:03:17.304694 systemd[1]: Created slice kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice - libcontainer container kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice.
Sep 13 00:03:17.320597 systemd[1]: Created slice kubepods-besteffort-pod760badfc_d54f_4563_b3b8_27b25aca93e2.slice - libcontainer container kubepods-besteffort-pod760badfc_d54f_4563_b3b8_27b25aca93e2.slice.
Sep 13 00:03:17.327998 containerd[1482]: time="2025-09-13T00:03:17.327947504Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:03:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:03:17.334392 systemd[1]: Created slice kubepods-besteffort-pod5bc73e8e_3e70_4fad_b2fe_fc395c348d63.slice - libcontainer container kubepods-besteffort-pod5bc73e8e_3e70_4fad_b2fe_fc395c348d63.slice.
Sep 13 00:03:17.342626 systemd[1]: Created slice kubepods-burstable-pod8a8ab913_32e4_496a_82e5_7f7a7a768ff5.slice - libcontainer container kubepods-burstable-pod8a8ab913_32e4_496a_82e5_7f7a7a768ff5.slice.
Sep 13 00:03:17.352442 systemd[1]: Created slice kubepods-besteffort-podce5c1cdb_0fa6_4674_884e_c8bb9a2b4451.slice - libcontainer container kubepods-besteffort-podce5c1cdb_0fa6_4674_884e_c8bb9a2b4451.slice.
Sep 13 00:03:17.361083 systemd[1]: Created slice kubepods-besteffort-pod59113340_495e_48cd_a556_b5807ae1b886.slice - libcontainer container kubepods-besteffort-pod59113340_495e_48cd_a556_b5807ae1b886.slice.
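The "Created slice" lines above follow the kubelet's systemd cgroup-driver naming scheme visible in the log itself: kubepods-<qos>-pod<UID>.slice, with the dashes in the pod UID replaced by underscores (systemd treats "-" as a hierarchy separator, which is also why escaped names like ...containerd\x2dmount... appear in mount units). A small reconstruction of that mapping (a sketch of the convention, not kubelet source):

package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice unit for a pod from its QoS class
// and UID, as the kubelet's systemd cgroup driver names them.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the calico-apiserver-f9bf767ff-65nq2 entries in this log.
	fmt.Println(podSliceName("besteffort", "c62008ae-00ba-4b2b-b2d4-a36f45e38687"))
	// Output: kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice
}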
Sep 13 00:03:17.371607 kubelet[2551]: I0913 00:03:17.371576 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82df\" (UniqueName: \"kubernetes.io/projected/760badfc-d54f-4563-b3b8-27b25aca93e2-kube-api-access-z82df\") pod \"calico-apiserver-f9bf767ff-9zn6h\" (UID: \"760badfc-d54f-4563-b3b8-27b25aca93e2\") " pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h"
Sep 13 00:03:17.371855 kubelet[2551]: I0913 00:03:17.371840 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/59113340-495e-48cd-a556-b5807ae1b886-calico-apiserver-certs\") pod \"calico-apiserver-5889f7d879-488cx\" (UID: \"59113340-495e-48cd-a556-b5807ae1b886\") " pod="calico-apiserver/calico-apiserver-5889f7d879-488cx"
Sep 13 00:03:17.372041 kubelet[2551]: I0913 00:03:17.372004 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd6bj\" (UniqueName: \"kubernetes.io/projected/59113340-495e-48cd-a556-b5807ae1b886-kube-api-access-qd6bj\") pod \"calico-apiserver-5889f7d879-488cx\" (UID: \"59113340-495e-48cd-a556-b5807ae1b886\") " pod="calico-apiserver/calico-apiserver-5889f7d879-488cx"
Sep 13 00:03:17.373044 systemd[1]: Created slice kubepods-besteffort-pod9ec71789_a324_43ba_b57e_8169dfe5b109.slice - libcontainer container kubepods-besteffort-pod9ec71789_a324_43ba_b57e_8169dfe5b109.slice.
Sep 13 00:03:17.374603 kubelet[2551]: I0913 00:03:17.373177 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ec71789-a324-43ba-b57e-8169dfe5b109-config\") pod \"goldmane-7988f88666-qj7nd\" (UID: \"9ec71789-a324-43ba-b57e-8169dfe5b109\") " pod="calico-system/goldmane-7988f88666-qj7nd"
Sep 13 00:03:17.374603 kubelet[2551]: I0913 00:03:17.373206 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/760badfc-d54f-4563-b3b8-27b25aca93e2-calico-apiserver-certs\") pod \"calico-apiserver-f9bf767ff-9zn6h\" (UID: \"760badfc-d54f-4563-b3b8-27b25aca93e2\") " pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h"
Sep 13 00:03:17.374603 kubelet[2551]: I0913 00:03:17.373236 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9ec71789-a324-43ba-b57e-8169dfe5b109-goldmane-key-pair\") pod \"goldmane-7988f88666-qj7nd\" (UID: \"9ec71789-a324-43ba-b57e-8169dfe5b109\") " pod="calico-system/goldmane-7988f88666-qj7nd"
Sep 13 00:03:17.374603 kubelet[2551]: I0913 00:03:17.373255 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c62008ae-00ba-4b2b-b2d4-a36f45e38687-calico-apiserver-certs\") pod \"calico-apiserver-f9bf767ff-65nq2\" (UID: \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\") " pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2"
Sep 13 00:03:17.374603 kubelet[2551]: I0913 00:03:17.373271 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-backend-key-pair\") pod \"whisker-57fbb95f8d-fhz2z\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " pod="calico-system/whisker-57fbb95f8d-fhz2z"
Sep 13 00:03:17.374730 kubelet[2551]: I0913 00:03:17.373295 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqt4\" (UniqueName: \"kubernetes.io/projected/8a8ab913-32e4-496a-82e5-7f7a7a768ff5-kube-api-access-rjqt4\") pod \"coredns-7c65d6cfc9-s5g7w\" (UID: \"8a8ab913-32e4-496a-82e5-7f7a7a768ff5\") " pod="kube-system/coredns-7c65d6cfc9-s5g7w"
Sep 13 00:03:17.374730 kubelet[2551]: I0913 00:03:17.373314 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a8ab913-32e4-496a-82e5-7f7a7a768ff5-config-volume\") pod \"coredns-7c65d6cfc9-s5g7w\" (UID: \"8a8ab913-32e4-496a-82e5-7f7a7a768ff5\") " pod="kube-system/coredns-7c65d6cfc9-s5g7w"
Sep 13 00:03:17.374730 kubelet[2551]: I0913 00:03:17.373332 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ec71789-a324-43ba-b57e-8169dfe5b109-goldmane-ca-bundle\") pod \"goldmane-7988f88666-qj7nd\" (UID: \"9ec71789-a324-43ba-b57e-8169dfe5b109\") " pod="calico-system/goldmane-7988f88666-qj7nd"
Sep 13 00:03:17.374730 kubelet[2551]: I0913 00:03:17.373353 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8l4r\" (UniqueName: \"kubernetes.io/projected/5bc73e8e-3e70-4fad-b2fe-fc395c348d63-kube-api-access-g8l4r\") pod \"calico-kube-controllers-5559ff8dc6-m5kcg\" (UID: \"5bc73e8e-3e70-4fad-b2fe-fc395c348d63\") " pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg"
Sep 13 00:03:17.374730 kubelet[2551]: I0913 00:03:17.373402 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2tn\" (UniqueName: \"kubernetes.io/projected/c62008ae-00ba-4b2b-b2d4-a36f45e38687-kube-api-access-xs2tn\") pod \"calico-apiserver-f9bf767ff-65nq2\" (UID: \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\") " pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2"
Sep 13 00:03:17.374839 kubelet[2551]: I0913 00:03:17.373442 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mffmh\" (UniqueName: \"kubernetes.io/projected/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-kube-api-access-mffmh\") pod \"whisker-57fbb95f8d-fhz2z\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " pod="calico-system/whisker-57fbb95f8d-fhz2z"
Sep 13 00:03:17.374839 kubelet[2551]: I0913 00:03:17.373461 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dsvp\" (UniqueName: \"kubernetes.io/projected/9ec71789-a324-43ba-b57e-8169dfe5b109-kube-api-access-4dsvp\") pod \"goldmane-7988f88666-qj7nd\" (UID: \"9ec71789-a324-43ba-b57e-8169dfe5b109\") " pod="calico-system/goldmane-7988f88666-qj7nd"
Sep 13 00:03:17.374839 kubelet[2551]: I0913 00:03:17.373487 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-ca-bundle\") pod \"whisker-57fbb95f8d-fhz2z\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " pod="calico-system/whisker-57fbb95f8d-fhz2z"
Sep 13 00:03:17.374839 kubelet[2551]: I0913 00:03:17.373504 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bc73e8e-3e70-4fad-b2fe-fc395c348d63-tigera-ca-bundle\") pod \"calico-kube-controllers-5559ff8dc6-m5kcg\" (UID: \"5bc73e8e-3e70-4fad-b2fe-fc395c348d63\") " pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg"
Sep 13 00:03:17.616194 containerd[1482]: time="2025-09-13T00:03:17.614877314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-65nq2,Uid:c62008ae-00ba-4b2b-b2d4-a36f45e38687,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:03:17.629597 containerd[1482]: time="2025-09-13T00:03:17.629488782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-9zn6h,Uid:760badfc-d54f-4563-b3b8-27b25aca93e2,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:03:17.645384 containerd[1482]: time="2025-09-13T00:03:17.644414503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559ff8dc6-m5kcg,Uid:5bc73e8e-3e70-4fad-b2fe-fc395c348d63,Namespace:calico-system,Attempt:0,}"
Sep 13 00:03:17.652576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40aaf1bee8d0ad7c94c82113fb32bb67dd56f10b45d0edbe64f3fc2a3e337d3b-rootfs.mount: Deactivated successfully.
Sep 13 00:03:17.663079 containerd[1482]: time="2025-09-13T00:03:17.661709647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fbb95f8d-fhz2z,Uid:ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451,Namespace:calico-system,Attempt:0,}"
Sep 13 00:03:17.674945 containerd[1482]: time="2025-09-13T00:03:17.672453388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-488cx,Uid:59113340-495e-48cd-a556-b5807ae1b886,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:03:17.680342 containerd[1482]: time="2025-09-13T00:03:17.680259804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qj7nd,Uid:9ec71789-a324-43ba-b57e-8169dfe5b109,Namespace:calico-system,Attempt:0,}"
Sep 13 00:03:17.818959 containerd[1482]: time="2025-09-13T00:03:17.818871961Z" level=error msg="Failed to destroy network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.820631 containerd[1482]: time="2025-09-13T00:03:17.820520151Z" level=error msg="encountered an error cleaning up failed sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.820785 containerd[1482]: time="2025-09-13T00:03:17.820746361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559ff8dc6-m5kcg,Uid:5bc73e8e-3e70-4fad-b2fe-fc395c348d63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.821277 kubelet[2551]: E0913 00:03:17.821229 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.821390 kubelet[2551]: E0913 00:03:17.821347 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg"
Sep 13 00:03:17.821390 kubelet[2551]: E0913 00:03:17.821382 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg"
Sep 13 00:03:17.821469 kubelet[2551]: E0913 00:03:17.821430 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5559ff8dc6-m5kcg_calico-system(5bc73e8e-3e70-4fad-b2fe-fc395c348d63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5559ff8dc6-m5kcg_calico-system(5bc73e8e-3e70-4fad-b2fe-fc395c348d63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg" podUID="5bc73e8e-3e70-4fad-b2fe-fc395c348d63"
Sep 13 00:03:17.830681 containerd[1482]: time="2025-09-13T00:03:17.830623386Z" level=error msg="Failed to destroy network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.831231 containerd[1482]: time="2025-09-13T00:03:17.831195450Z" level=error msg="encountered an error cleaning up failed sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.831311 containerd[1482]: time="2025-09-13T00:03:17.831253133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-65nq2,Uid:c62008ae-00ba-4b2b-b2d4-a36f45e38687,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:03:17.831528 kubelet[2551]:
E0913 00:03:17.831467 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.831587 kubelet[2551]: E0913 00:03:17.831547 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2" Sep 13 00:03:17.831587 kubelet[2551]: E0913 00:03:17.831566 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2" Sep 13 00:03:17.831646 kubelet[2551]: E0913 00:03:17.831606 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f9bf767ff-65nq2_calico-apiserver(c62008ae-00ba-4b2b-b2d4-a36f45e38687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f9bf767ff-65nq2_calico-apiserver(c62008ae-00ba-4b2b-b2d4-a36f45e38687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2" podUID="c62008ae-00ba-4b2b-b2d4-a36f45e38687" Sep 13 00:03:17.854951 containerd[1482]: time="2025-09-13T00:03:17.854874148Z" level=error msg="Failed to destroy network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.855697 containerd[1482]: time="2025-09-13T00:03:17.855359409Z" level=error msg="encountered an error cleaning up failed sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.856177 containerd[1482]: time="2025-09-13T00:03:17.856136762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-9zn6h,Uid:760badfc-d54f-4563-b3b8-27b25aca93e2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.856730 kubelet[2551]: E0913 00:03:17.856587 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.856802 kubelet[2551]: E0913 00:03:17.856750 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h" Sep 13 00:03:17.856802 kubelet[2551]: E0913 00:03:17.856769 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h" Sep 13 00:03:17.856856 kubelet[2551]: E0913 00:03:17.856811 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f9bf767ff-9zn6h_calico-apiserver(760badfc-d54f-4563-b3b8-27b25aca93e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f9bf767ff-9zn6h_calico-apiserver(760badfc-d54f-4563-b3b8-27b25aca93e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h" podUID="760badfc-d54f-4563-b3b8-27b25aca93e2" Sep 13 00:03:17.882146 containerd[1482]: time="2025-09-13T00:03:17.882020874Z" level=error msg="Failed to destroy network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.883779 containerd[1482]: time="2025-09-13T00:03:17.883596622Z" level=error msg="encountered an error cleaning up failed sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.883779 containerd[1482]: time="2025-09-13T00:03:17.883669025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qj7nd,Uid:9ec71789-a324-43ba-b57e-8169dfe5b109,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.883952 kubelet[2551]: E0913 00:03:17.883885 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.883995 kubelet[2551]: E0913 00:03:17.883975 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-qj7nd" Sep 13 00:03:17.884033 kubelet[2551]: E0913 00:03:17.883999 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-qj7nd" Sep 13 00:03:17.885921 kubelet[2551]: E0913 00:03:17.885221 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-qj7nd_calico-system(9ec71789-a324-43ba-b57e-8169dfe5b109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-qj7nd_calico-system(9ec71789-a324-43ba-b57e-8169dfe5b109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-qj7nd" podUID="9ec71789-a324-43ba-b57e-8169dfe5b109" Sep 13 00:03:17.897956 containerd[1482]: time="2025-09-13T00:03:17.895979154Z" level=error msg="Failed to destroy network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.897956 containerd[1482]: time="2025-09-13T00:03:17.897236688Z" level=error msg="encountered an error cleaning up failed sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.897956 containerd[1482]: time="2025-09-13T00:03:17.897295131Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-57fbb95f8d-fhz2z,Uid:ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.899748 kubelet[2551]: E0913 00:03:17.899125 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.899748 kubelet[2551]: E0913 00:03:17.899199 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fbb95f8d-fhz2z" Sep 13 00:03:17.899748 kubelet[2551]: E0913 00:03:17.899220 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fbb95f8d-fhz2z" Sep 13 00:03:17.899990 kubelet[2551]: E0913 00:03:17.899262 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57fbb95f8d-fhz2z_calico-system(ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57fbb95f8d-fhz2z_calico-system(ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fbb95f8d-fhz2z" podUID="ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" Sep 13 00:03:17.903561 containerd[1482]: time="2025-09-13T00:03:17.903450995Z" level=error msg="Failed to destroy network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.904720 containerd[1482]: time="2025-09-13T00:03:17.904546642Z" level=error msg="encountered an error cleaning up failed sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.904720 containerd[1482]: 
time="2025-09-13T00:03:17.904615805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-488cx,Uid:59113340-495e-48cd-a556-b5807ae1b886,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.905189 kubelet[2551]: E0913 00:03:17.905016 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:17.905189 kubelet[2551]: E0913 00:03:17.905077 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5889f7d879-488cx" Sep 13 00:03:17.905189 kubelet[2551]: E0913 00:03:17.905098 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5889f7d879-488cx" Sep 13 00:03:17.905318 kubelet[2551]: E0913 00:03:17.905143 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5889f7d879-488cx_calico-apiserver(59113340-495e-48cd-a556-b5807ae1b886)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5889f7d879-488cx_calico-apiserver(59113340-495e-48cd-a556-b5807ae1b886)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5889f7d879-488cx" podUID="59113340-495e-48cd-a556-b5807ae1b886" Sep 13 00:03:18.312950 kubelet[2551]: I0913 00:03:18.312819 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:18.314570 containerd[1482]: time="2025-09-13T00:03:18.314116438Z" level=info msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" Sep 13 00:03:18.314570 containerd[1482]: time="2025-09-13T00:03:18.314294206Z" level=info msg="Ensure that sandbox f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd in task-service has been cleanup successfully" Sep 13 00:03:18.316249 kubelet[2551]: I0913 00:03:18.316220 2551 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:18.317173 containerd[1482]: time="2025-09-13T00:03:18.317140003Z" level=info msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" Sep 13 00:03:18.317427 containerd[1482]: time="2025-09-13T00:03:18.317402534Z" level=info msg="Ensure that sandbox ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1 in task-service has been cleanup successfully" Sep 13 00:03:18.321099 kubelet[2551]: I0913 00:03:18.321063 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:18.321974 containerd[1482]: time="2025-09-13T00:03:18.321937641Z" level=info msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" Sep 13 00:03:18.322935 containerd[1482]: time="2025-09-13T00:03:18.322675872Z" level=info msg="Ensure that sandbox 4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042 in task-service has been cleanup successfully" Sep 13 00:03:18.326664 kubelet[2551]: I0913 00:03:18.326608 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:18.329678 containerd[1482]: time="2025-09-13T00:03:18.328567075Z" level=info msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" Sep 13 00:03:18.329944 containerd[1482]: time="2025-09-13T00:03:18.329890210Z" level=info msg="Ensure that sandbox d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508 in task-service has been cleanup successfully" Sep 13 00:03:18.334193 kubelet[2551]: I0913 00:03:18.333040 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:18.335772 containerd[1482]: time="2025-09-13T00:03:18.335362356Z" level=info msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" Sep 13 00:03:18.336628 kubelet[2551]: I0913 00:03:18.336602 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:18.338889 containerd[1482]: time="2025-09-13T00:03:18.337274634Z" level=info msg="Ensure that sandbox 7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99 in task-service has been cleanup successfully" Sep 13 00:03:18.345304 containerd[1482]: time="2025-09-13T00:03:18.339094830Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:18.345611 containerd[1482]: time="2025-09-13T00:03:18.345589018Z" level=info msg="Ensure that sandbox ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8 in task-service has been cleanup successfully" Sep 13 00:03:18.355336 containerd[1482]: time="2025-09-13T00:03:18.353295096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:03:18.375634 kubelet[2551]: E0913 00:03:18.375604 2551 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:03:18.377623 kubelet[2551]: E0913 00:03:18.376969 2551 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d-config-volume 
podName:ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d nodeName:}" failed. No retries permitted until 2025-09-13 00:03:18.876945152 +0000 UTC m=+32.883435531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d-config-volume") pod "coredns-7c65d6cfc9-6fdv4" (UID: "ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:03:18.432524 containerd[1482]: time="2025-09-13T00:03:18.432468324Z" level=error msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" failed" error="failed to destroy network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.432842 kubelet[2551]: E0913 00:03:18.432797 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:18.432926 kubelet[2551]: E0913 00:03:18.432863 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8"} Sep 13 00:03:18.432971 kubelet[2551]: E0913 00:03:18.432954 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"760badfc-d54f-4563-b3b8-27b25aca93e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.433043 kubelet[2551]: E0913 00:03:18.432978 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"760badfc-d54f-4563-b3b8-27b25aca93e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h" podUID="760badfc-d54f-4563-b3b8-27b25aca93e2" Sep 13 00:03:18.438964 containerd[1482]: time="2025-09-13T00:03:18.438869029Z" level=error msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" failed" error="failed to destroy network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.439878 kubelet[2551]: E0913 00:03:18.439455 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:18.439878 kubelet[2551]: E0913 00:03:18.439677 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99"} Sep 13 00:03:18.439878 kubelet[2551]: E0913 00:03:18.439710 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ec71789-a324-43ba-b57e-8169dfe5b109\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.439878 kubelet[2551]: E0913 00:03:18.439735 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ec71789-a324-43ba-b57e-8169dfe5b109\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-qj7nd" podUID="9ec71789-a324-43ba-b57e-8169dfe5b109" Sep 13 00:03:18.443646 containerd[1482]: time="2025-09-13T00:03:18.443603584Z" level=error msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" failed" error="failed to destroy network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.444269 kubelet[2551]: E0913 00:03:18.444229 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:18.444326 kubelet[2551]: E0913 00:03:18.444279 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1"} Sep 13 00:03:18.444326 kubelet[2551]: E0913 00:03:18.444311 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.444427 kubelet[2551]: E0913 00:03:18.444334 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fbb95f8d-fhz2z" podUID="ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" Sep 13 00:03:18.446654 containerd[1482]: time="2025-09-13T00:03:18.446600868Z" level=error msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" failed" error="failed to destroy network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.446947 kubelet[2551]: E0913 00:03:18.446847 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:18.447096 kubelet[2551]: E0913 00:03:18.447007 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd"} Sep 13 00:03:18.447096 kubelet[2551]: E0913 00:03:18.447043 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59113340-495e-48cd-a556-b5807ae1b886\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.447096 kubelet[2551]: E0913 00:03:18.447064 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59113340-495e-48cd-a556-b5807ae1b886\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5889f7d879-488cx" podUID="59113340-495e-48cd-a556-b5807ae1b886" Sep 13 00:03:18.449071 containerd[1482]: time="2025-09-13T00:03:18.449021528Z" level=error msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" failed" error="failed to destroy network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.449360 kubelet[2551]: E0913 00:03:18.449325 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:18.449458 kubelet[2551]: E0913 00:03:18.449440 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042"} Sep 13 00:03:18.449629 kubelet[2551]: E0913 00:03:18.449604 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5bc73e8e-3e70-4fad-b2fe-fc395c348d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.449725 kubelet[2551]: E0913 00:03:18.449705 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5bc73e8e-3e70-4fad-b2fe-fc395c348d63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg" podUID="5bc73e8e-3e70-4fad-b2fe-fc395c348d63" Sep 13 00:03:18.453345 containerd[1482]: time="2025-09-13T00:03:18.453289184Z" level=error msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" failed" error="failed to destroy network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:18.453779 kubelet[2551]: E0913 00:03:18.453730 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:18.453883 kubelet[2551]: E0913 00:03:18.453785 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508"} Sep 13 00:03:18.453883 kubelet[2551]: E0913 00:03:18.453831 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"c62008ae-00ba-4b2b-b2d4-a36f45e38687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:18.453883 kubelet[2551]: E0913 00:03:18.453851 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2" podUID="c62008ae-00ba-4b2b-b2d4-a36f45e38687" Sep 13 00:03:18.477878 kubelet[2551]: E0913 00:03:18.475468 2551 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 13 00:03:18.477878 kubelet[2551]: E0913 00:03:18.475612 2551 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8a8ab913-32e4-496a-82e5-7f7a7a768ff5-config-volume podName:8a8ab913-32e4-496a-82e5-7f7a7a768ff5 nodeName:}" failed. No retries permitted until 2025-09-13 00:03:18.975580344 +0000 UTC m=+32.982070803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8a8ab913-32e4-496a-82e5-7f7a7a768ff5-config-volume") pod "coredns-7c65d6cfc9-s5g7w" (UID: "8a8ab913-32e4-496a-82e5-7f7a7a768ff5") : failed to sync configmap cache: timed out waiting for the condition Sep 13 00:03:18.611974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042-shm.mount: Deactivated successfully. Sep 13 00:03:18.612145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8-shm.mount: Deactivated successfully. Sep 13 00:03:18.612259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508-shm.mount: Deactivated successfully. 
Sep 13 00:03:19.083556 containerd[1482]: time="2025-09-13T00:03:19.083500030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6fdv4,Uid:ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d,Namespace:kube-system,Attempt:0,}" Sep 13 00:03:19.148071 containerd[1482]: time="2025-09-13T00:03:19.147691579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s5g7w,Uid:8a8ab913-32e4-496a-82e5-7f7a7a768ff5,Namespace:kube-system,Attempt:0,}" Sep 13 00:03:19.151914 containerd[1482]: time="2025-09-13T00:03:19.151837223Z" level=error msg="Failed to destroy network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.152621 containerd[1482]: time="2025-09-13T00:03:19.152243919Z" level=error msg="encountered an error cleaning up failed sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.154041 containerd[1482]: time="2025-09-13T00:03:19.153981148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6fdv4,Uid:ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.155945 kubelet[2551]: E0913 00:03:19.154202 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.155945 kubelet[2551]: E0913 00:03:19.154256 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6fdv4" Sep 13 00:03:19.155945 kubelet[2551]: E0913 00:03:19.154280 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6fdv4" Sep 13 00:03:19.154681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf-shm.mount: Deactivated successfully. 
Sep 13 00:03:19.156130 kubelet[2551]: E0913 00:03:19.154324 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6fdv4_kube-system(ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6fdv4_kube-system(ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6fdv4" podUID="ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d" Sep 13 00:03:19.172586 systemd[1]: Created slice kubepods-besteffort-pod96d6c8b4_b0e0_4570_9741_27188bdbb60e.slice - libcontainer container kubepods-besteffort-pod96d6c8b4_b0e0_4570_9741_27188bdbb60e.slice. Sep 13 00:03:19.176554 containerd[1482]: time="2025-09-13T00:03:19.176396478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r9g,Uid:96d6c8b4-b0e0-4570-9741-27188bdbb60e,Namespace:calico-system,Attempt:0,}" Sep 13 00:03:19.229179 containerd[1482]: time="2025-09-13T00:03:19.229063089Z" level=error msg="Failed to destroy network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.229747 containerd[1482]: time="2025-09-13T00:03:19.229577109Z" level=error msg="encountered an error cleaning up failed sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.229747 containerd[1482]: time="2025-09-13T00:03:19.229641552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s5g7w,Uid:8a8ab913-32e4-496a-82e5-7f7a7a768ff5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.230545 kubelet[2551]: E0913 00:03:19.230178 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.230545 kubelet[2551]: E0913 00:03:19.230231 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-s5g7w" 
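Note the pacing on the failed operations: the configmap mounts earlier were embargoed with "No retries permitted until ... (durationBeforeRetry 500ms)", and the pod workers log "Error syncing pod, skipping" rather than spinning. A toy Go sketch of that doubling-backoff pattern follows; the 500ms base is taken from the log, while the doubling factor and two-minute cap are assumptions for illustration, not values confirmed by these entries:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff re-runs op, doubling the embargo between consecutive
// failures in the spirit of kubelet's "No retries permitted until ..."
// entries. Base delay from the log; doubling and cap are assumed.
func retryWithBackoff(op func() error) {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute

	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil {
			fmt.Printf("attempt %d succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	tries := 0
	retryWithBackoff(func() error {
		tries++
		if tries < 3 { // simulate a configmap cache that syncs on the third try
			return fmt.Errorf("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	})
}
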
Sep 13 00:03:19.230545 kubelet[2551]: E0913 00:03:19.230252 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-s5g7w" Sep 13 00:03:19.230668 kubelet[2551]: E0913 00:03:19.230289 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-s5g7w_kube-system(8a8ab913-32e4-496a-82e5-7f7a7a768ff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-s5g7w_kube-system(8a8ab913-32e4-496a-82e5-7f7a7a768ff5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-s5g7w" podUID="8a8ab913-32e4-496a-82e5-7f7a7a768ff5" Sep 13 00:03:19.255672 containerd[1482]: time="2025-09-13T00:03:19.255613263Z" level=error msg="Failed to destroy network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.256219 containerd[1482]: time="2025-09-13T00:03:19.256182406Z" level=error msg="encountered an error cleaning up failed sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.256359 containerd[1482]: time="2025-09-13T00:03:19.256337012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r9g,Uid:96d6c8b4-b0e0-4570-9741-27188bdbb60e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.257130 kubelet[2551]: E0913 00:03:19.256735 2551 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.257130 kubelet[2551]: E0913 00:03:19.256790 2551 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:19.257130 kubelet[2551]: E0913 00:03:19.256808 2551 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54r9g" Sep 13 00:03:19.257305 kubelet[2551]: E0913 00:03:19.256876 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-54r9g_calico-system(96d6c8b4-b0e0-4570-9741-27188bdbb60e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-54r9g_calico-system(96d6c8b4-b0e0-4570-9741-27188bdbb60e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e" Sep 13 00:03:19.354112 kubelet[2551]: I0913 00:03:19.353398 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:19.354614 containerd[1482]: time="2025-09-13T00:03:19.354152055Z" level=info msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" Sep 13 00:03:19.354614 containerd[1482]: time="2025-09-13T00:03:19.354367023Z" level=info msg="Ensure that sandbox b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b in task-service has been cleanup successfully" Sep 13 00:03:19.357594 kubelet[2551]: I0913 00:03:19.357569 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:19.358091 containerd[1482]: time="2025-09-13T00:03:19.358065010Z" level=info msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" Sep 13 00:03:19.358876 containerd[1482]: time="2025-09-13T00:03:19.358851841Z" level=info msg="Ensure that sandbox 52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3 in task-service has been cleanup successfully" Sep 13 00:03:19.360460 kubelet[2551]: I0913 00:03:19.360206 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:19.361214 containerd[1482]: time="2025-09-13T00:03:19.361034568Z" level=info msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" Sep 13 00:03:19.361678 containerd[1482]: time="2025-09-13T00:03:19.361349820Z" level=info msg="Ensure that sandbox ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf in task-service has been cleanup successfully" Sep 13 00:03:19.398849 containerd[1482]: time="2025-09-13T00:03:19.398749665Z" level=error msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" failed" error="failed to destroy network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.399512 kubelet[2551]: E0913 00:03:19.399168 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:19.399512 kubelet[2551]: E0913 00:03:19.399217 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b"} Sep 13 00:03:19.399512 kubelet[2551]: E0913 00:03:19.399286 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a8ab913-32e4-496a-82e5-7f7a7a768ff5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:19.399512 kubelet[2551]: E0913 00:03:19.399311 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a8ab913-32e4-496a-82e5-7f7a7a768ff5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-s5g7w" podUID="8a8ab913-32e4-496a-82e5-7f7a7a768ff5" Sep 13 00:03:19.405180 containerd[1482]: time="2025-09-13T00:03:19.405140079Z" level=error msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" failed" error="failed to destroy network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.405928 kubelet[2551]: E0913 00:03:19.405734 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:19.405928 kubelet[2551]: E0913 00:03:19.405784 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf"} Sep 13 00:03:19.405928 kubelet[2551]: E0913 00:03:19.405819 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:19.405928 kubelet[2551]: E0913 00:03:19.405840 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6fdv4" podUID="ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d" Sep 13 00:03:19.408033 containerd[1482]: time="2025-09-13T00:03:19.407930550Z" level=error msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" failed" error="failed to destroy network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:03:19.408214 kubelet[2551]: E0913 00:03:19.408144 2551 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:19.408214 kubelet[2551]: E0913 00:03:19.408191 2551 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3"} Sep 13 00:03:19.408319 kubelet[2551]: E0913 00:03:19.408221 2551 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:03:19.408319 kubelet[2551]: E0913 00:03:19.408241 2551 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96d6c8b4-b0e0-4570-9741-27188bdbb60e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54r9g" podUID="96d6c8b4-b0e0-4570-9741-27188bdbb60e" Sep 13 00:03:19.611320 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b-shm.mount: Deactivated successfully. Sep 13 00:03:22.226946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925827556.mount: Deactivated successfully. Sep 13 00:03:22.263923 containerd[1482]: time="2025-09-13T00:03:22.263836577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:22.265391 containerd[1482]: time="2025-09-13T00:03:22.265235706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:03:22.267631 containerd[1482]: time="2025-09-13T00:03:22.266971768Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:22.270344 containerd[1482]: time="2025-09-13T00:03:22.270296446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:22.271079 containerd[1482]: time="2025-09-13T00:03:22.271037792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.917162793s" Sep 13 00:03:22.271079 containerd[1482]: time="2025-09-13T00:03:22.271077074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:03:22.290095 containerd[1482]: time="2025-09-13T00:03:22.290061188Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:03:22.317675 containerd[1482]: time="2025-09-13T00:03:22.317457161Z" level=info msg="CreateContainer within sandbox \"bb619e35cabfe62da4030e26faeb80aa96f8a50d2c3d4defbf61fc291d9c6307\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0\"" Sep 13 00:03:22.318931 containerd[1482]: time="2025-09-13T00:03:22.318683244Z" level=info msg="StartContainer for \"83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0\"" Sep 13 00:03:22.352243 systemd[1]: Started cri-containerd-83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0.scope - libcontainer container 83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0. Sep 13 00:03:22.390276 containerd[1482]: time="2025-09-13T00:03:22.390116781Z" level=info msg="StartContainer for \"83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0\" returns successfully" Sep 13 00:03:22.538986 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:03:22.539735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
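
[editor's note] The repeated CreatePodSandbox/StopPodSandbox failures above all trace back to one precondition: the Calico CNI binary refuses to do any sandbox network setup or teardown until calico/node has written /var/lib/calico/nodename. A minimal Go sketch of that gate follows; it is not Calico's actual source, but the path and the operator hint are taken verbatim from the log entries.

package main

import (
    "fmt"
    "os"
    "strings"
)

// nodenameFile is the file calico/node writes once it is up; the CNI
// plugin checks it before doing any sandbox work (per the errors above).
const nodenameFile = "/var/lib/calico/nodename"

func loadNodename() (string, error) {
    data, err := os.ReadFile(nodenameFile)
    if err != nil {
        // Same operator hint the plugin emits in the log.
        return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    }
    return strings.TrimSpace(string(data)), nil
}

func main() {
    name, err := loadNodename()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1) // a CNI ADD/DEL would fail here, exactly as in the entries above
    }
    fmt.Println("node:", name)
}

Once the calico-node container is pulled and started (the entries just above), the file exists and the pending sandboxes can be retried successfully.
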
Sep 13 00:03:22.684683 containerd[1482]: time="2025-09-13T00:03:22.683021983Z" level=info msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.778 [INFO][3748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.778 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" iface="eth0" netns="/var/run/netns/cni-351b99a2-7654-1327-9683-37b9b18c74bf" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.781 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" iface="eth0" netns="/var/run/netns/cni-351b99a2-7654-1327-9683-37b9b18c74bf" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.782 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" iface="eth0" netns="/var/run/netns/cni-351b99a2-7654-1327-9683-37b9b18c74bf" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.782 [INFO][3748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.782 [INFO][3748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.857 [INFO][3761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.858 [INFO][3761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.858 [INFO][3761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.869 [WARNING][3761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.869 [INFO][3761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.872 [INFO][3761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:22.881538 containerd[1482]: 2025-09-13 00:03:22.878 [INFO][3748] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:22.883106 containerd[1482]: time="2025-09-13T00:03:22.883060527Z" level=info msg="TearDown network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" successfully" Sep 13 00:03:22.883106 containerd[1482]: time="2025-09-13T00:03:22.883101088Z" level=info msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" returns successfully" Sep 13 00:03:23.012162 kubelet[2551]: I0913 00:03:23.012078 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-backend-key-pair\") pod \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " Sep 13 00:03:23.012162 kubelet[2551]: I0913 00:03:23.012170 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-ca-bundle\") pod \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " Sep 13 00:03:23.012872 kubelet[2551]: I0913 00:03:23.012208 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mffmh\" (UniqueName: \"kubernetes.io/projected/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-kube-api-access-mffmh\") pod \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\" (UID: \"ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451\") " Sep 13 00:03:23.013978 kubelet[2551]: I0913 00:03:23.013757 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" (UID: "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:03:23.017235 kubelet[2551]: I0913 00:03:23.017128 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" (UID: "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:03:23.017377 kubelet[2551]: I0913 00:03:23.017246 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-kube-api-access-mffmh" (OuterVolumeSpecName: "kube-api-access-mffmh") pod "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" (UID: "ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451"). InnerVolumeSpecName "kube-api-access-mffmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:03:23.112806 kubelet[2551]: I0913 00:03:23.112677 2551 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-ca-bundle\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:03:23.112806 kubelet[2551]: I0913 00:03:23.112736 2551 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mffmh\" (UniqueName: \"kubernetes.io/projected/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-kube-api-access-mffmh\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:03:23.112806 kubelet[2551]: I0913 00:03:23.112763 2551 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:03:23.227169 systemd[1]: run-netns-cni\x2d351b99a2\x2d7654\x2d1327\x2d9683\x2d37b9b18c74bf.mount: Deactivated successfully. Sep 13 00:03:23.227283 systemd[1]: var-lib-kubelet-pods-ce5c1cdb\x2d0fa6\x2d4674\x2d884e\x2dc8bb9a2b4451-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmffmh.mount: Deactivated successfully. Sep 13 00:03:23.227340 systemd[1]: var-lib-kubelet-pods-ce5c1cdb\x2d0fa6\x2d4674\x2d884e\x2dc8bb9a2b4451-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:03:23.385408 systemd[1]: Removed slice kubepods-besteffort-podce5c1cdb_0fa6_4674_884e_c8bb9a2b4451.slice - libcontainer container kubepods-besteffort-podce5c1cdb_0fa6_4674_884e_c8bb9a2b4451.slice. Sep 13 00:03:23.400942 kubelet[2551]: I0913 00:03:23.400353 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dp86n" podStartSLOduration=1.94427794 podStartE2EDuration="14.400335127s" podCreationTimestamp="2025-09-13 00:03:09 +0000 UTC" firstStartedPulling="2025-09-13 00:03:09.816600223 +0000 UTC m=+23.823090602" lastFinishedPulling="2025-09-13 00:03:22.27265733 +0000 UTC m=+36.279147789" observedRunningTime="2025-09-13 00:03:23.398533465 +0000 UTC m=+37.405023924" watchObservedRunningTime="2025-09-13 00:03:23.400335127 +0000 UTC m=+37.406825546" Sep 13 00:03:23.468787 systemd[1]: Created slice kubepods-besteffort-pod223b550e_9a38_4830_9a78_fe6c0299805e.slice - libcontainer container kubepods-besteffort-pod223b550e_9a38_4830_9a78_fe6c0299805e.slice. 
Sep 13 00:03:23.616405 kubelet[2551]: I0913 00:03:23.616140 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/223b550e-9a38-4830-9a78-fe6c0299805e-whisker-ca-bundle\") pod \"whisker-8fcf8c895-tqdxk\" (UID: \"223b550e-9a38-4830-9a78-fe6c0299805e\") " pod="calico-system/whisker-8fcf8c895-tqdxk" Sep 13 00:03:23.616405 kubelet[2551]: I0913 00:03:23.616245 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/223b550e-9a38-4830-9a78-fe6c0299805e-whisker-backend-key-pair\") pod \"whisker-8fcf8c895-tqdxk\" (UID: \"223b550e-9a38-4830-9a78-fe6c0299805e\") " pod="calico-system/whisker-8fcf8c895-tqdxk" Sep 13 00:03:23.616405 kubelet[2551]: I0913 00:03:23.616287 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2rw\" (UniqueName: \"kubernetes.io/projected/223b550e-9a38-4830-9a78-fe6c0299805e-kube-api-access-pz2rw\") pod \"whisker-8fcf8c895-tqdxk\" (UID: \"223b550e-9a38-4830-9a78-fe6c0299805e\") " pod="calico-system/whisker-8fcf8c895-tqdxk" Sep 13 00:03:23.774375 containerd[1482]: time="2025-09-13T00:03:23.774241107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8fcf8c895-tqdxk,Uid:223b550e-9a38-4830-9a78-fe6c0299805e,Namespace:calico-system,Attempt:0,}" Sep 13 00:03:23.932830 systemd-networkd[1354]: cali66944ac8db6: Link UP Sep 13 00:03:23.934789 systemd-networkd[1354]: cali66944ac8db6: Gained carrier Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.822 [INFO][3786] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.845 [INFO][3786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0 whisker-8fcf8c895- calico-system 223b550e-9a38-4830-9a78-fe6c0299805e 908 0 2025-09-13 00:03:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8fcf8c895 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e whisker-8fcf8c895-tqdxk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali66944ac8db6 [] [] }} ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.845 [INFO][3786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.875 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" HandleID="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.875 [INFO][3797] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" HandleID="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"whisker-8fcf8c895-tqdxk", "timestamp":"2025-09-13 00:03:23.875291292 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.875 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.875 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.875 [INFO][3797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.888 [INFO][3797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.894 [INFO][3797] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.900 [INFO][3797] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.902 [INFO][3797] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.905 [INFO][3797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.905 [INFO][3797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.907 [INFO][3797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.913 [INFO][3797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.920 [INFO][3797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.129/26] block=192.168.28.128/26 handle="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.920 [INFO][3797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.129/26] handle="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.920 
[INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:23.957109 containerd[1482]: 2025-09-13 00:03:23.920 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.129/26] IPv6=[] ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" HandleID="k8s-pod-network.72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.923 [INFO][3786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0", GenerateName:"whisker-8fcf8c895-", Namespace:"calico-system", SelfLink:"", UID:"223b550e-9a38-4830-9a78-fe6c0299805e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8fcf8c895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"whisker-8fcf8c895-tqdxk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66944ac8db6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.924 [INFO][3786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.129/32] ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.924 [INFO][3786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66944ac8db6 ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.935 [INFO][3786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.935 [INFO][3786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0", GenerateName:"whisker-8fcf8c895-", Namespace:"calico-system", SelfLink:"", UID:"223b550e-9a38-4830-9a78-fe6c0299805e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8fcf8c895", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f", Pod:"whisker-8fcf8c895-tqdxk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66944ac8db6", MAC:"2e:b7:b9:7d:e8:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:23.958289 containerd[1482]: 2025-09-13 00:03:23.954 [INFO][3786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f" Namespace="calico-system" Pod="whisker-8fcf8c895-tqdxk" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--8fcf8c895--tqdxk-eth0" Sep 13 00:03:23.982710 containerd[1482]: time="2025-09-13T00:03:23.981968750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:23.982963 containerd[1482]: time="2025-09-13T00:03:23.982752537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:23.983067 containerd[1482]: time="2025-09-13T00:03:23.982783138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:23.983438 containerd[1482]: time="2025-09-13T00:03:23.983369718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:24.002129 systemd[1]: Started cri-containerd-72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f.scope - libcontainer container 72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f. 
Sep 13 00:03:24.081823 containerd[1482]: time="2025-09-13T00:03:24.081768599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8fcf8c895-tqdxk,Uid:223b550e-9a38-4830-9a78-fe6c0299805e,Namespace:calico-system,Attempt:0,} returns sandbox id \"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f\"" Sep 13 00:03:24.085543 containerd[1482]: time="2025-09-13T00:03:24.085477442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:03:24.163147 kubelet[2551]: I0913 00:03:24.162944 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451" path="/var/lib/kubelet/pods/ce5c1cdb-0fa6-4674-884e-c8bb9a2b4451/volumes" Sep 13 00:03:24.380621 kubelet[2551]: I0913 00:03:24.380591 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:03:24.439960 kernel: bpftool[3974]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:03:24.642253 systemd-networkd[1354]: vxlan.calico: Link UP Sep 13 00:03:24.642261 systemd-networkd[1354]: vxlan.calico: Gained carrier Sep 13 00:03:25.826245 containerd[1482]: time="2025-09-13T00:03:25.826188001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:25.828958 containerd[1482]: time="2025-09-13T00:03:25.828920569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:03:25.830111 containerd[1482]: time="2025-09-13T00:03:25.830068486Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:25.836414 containerd[1482]: time="2025-09-13T00:03:25.834825798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:25.836414 containerd[1482]: time="2025-09-13T00:03:25.836041957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.750515754s" Sep 13 00:03:25.836414 containerd[1482]: time="2025-09-13T00:03:25.836071638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:03:25.853926 containerd[1482]: time="2025-09-13T00:03:25.853857529Z" level=info msg="CreateContainer within sandbox \"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:03:25.871308 containerd[1482]: time="2025-09-13T00:03:25.871241046Z" level=info msg="CreateContainer within sandbox \"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0\"" Sep 13 00:03:25.872815 containerd[1482]: time="2025-09-13T00:03:25.872633011Z" level=info msg="StartContainer for \"2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0\"" Sep 13 
00:03:25.924483 systemd[1]: run-containerd-runc-k8s.io-2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0-runc.WjVkdG.mount: Deactivated successfully. Sep 13 00:03:25.933352 systemd[1]: Started cri-containerd-2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0.scope - libcontainer container 2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0. Sep 13 00:03:25.972858 containerd[1482]: time="2025-09-13T00:03:25.972772541Z" level=info msg="StartContainer for \"2a4c3059abc4d9ea1e8be4128fe17669567048c62a316ea340872ba000e2abf0\" returns successfully" Sep 13 00:03:25.975921 containerd[1482]: time="2025-09-13T00:03:25.975133217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:03:25.999987 systemd-networkd[1354]: cali66944ac8db6: Gained IPv6LL Sep 13 00:03:26.002237 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Sep 13 00:03:28.923681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883628194.mount: Deactivated successfully. Sep 13 00:03:28.943437 containerd[1482]: time="2025-09-13T00:03:28.942641486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:28.944036 containerd[1482]: time="2025-09-13T00:03:28.944009766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:03:28.944458 containerd[1482]: time="2025-09-13T00:03:28.944435698Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:28.947367 containerd[1482]: time="2025-09-13T00:03:28.947335903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:28.948194 containerd[1482]: time="2025-09-13T00:03:28.948163087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.972989309s" Sep 13 00:03:28.948375 containerd[1482]: time="2025-09-13T00:03:28.948277690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:03:28.953395 containerd[1482]: time="2025-09-13T00:03:28.953349159Z" level=info msg="CreateContainer within sandbox \"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:03:28.973216 containerd[1482]: time="2025-09-13T00:03:28.973154377Z" level=info msg="CreateContainer within sandbox \"72bd03bb2ad29fad5d4b0b24f5c8acdad74334a1b7412448945eeeea30778b3f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4ca24b03b6390fffa90c53db9d8e83e5ade1012b328a232bc95ee8594e5f57ec\"" Sep 13 00:03:28.974509 containerd[1482]: time="2025-09-13T00:03:28.974473056Z" level=info msg="StartContainer for \"4ca24b03b6390fffa90c53db9d8e83e5ade1012b328a232bc95ee8594e5f57ec\"" Sep 13 00:03:29.043111 
systemd[1]: Started cri-containerd-4ca24b03b6390fffa90c53db9d8e83e5ade1012b328a232bc95ee8594e5f57ec.scope - libcontainer container 4ca24b03b6390fffa90c53db9d8e83e5ade1012b328a232bc95ee8594e5f57ec. Sep 13 00:03:29.102357 containerd[1482]: time="2025-09-13T00:03:29.102218545Z" level=info msg="StartContainer for \"4ca24b03b6390fffa90c53db9d8e83e5ade1012b328a232bc95ee8594e5f57ec\" returns successfully" Sep 13 00:03:30.160104 containerd[1482]: time="2025-09-13T00:03:30.159175749Z" level=info msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" Sep 13 00:03:30.161445 containerd[1482]: time="2025-09-13T00:03:30.160966479Z" level=info msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" Sep 13 00:03:30.229946 kubelet[2551]: I0913 00:03:30.229855 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8fcf8c895-tqdxk" podStartSLOduration=2.365626209 podStartE2EDuration="7.2298353s" podCreationTimestamp="2025-09-13 00:03:23 +0000 UTC" firstStartedPulling="2025-09-13 00:03:24.085087069 +0000 UTC m=+38.091577488" lastFinishedPulling="2025-09-13 00:03:28.94929616 +0000 UTC m=+42.955786579" observedRunningTime="2025-09-13 00:03:29.433420187 +0000 UTC m=+43.439910606" watchObservedRunningTime="2025-09-13 00:03:30.2298353 +0000 UTC m=+44.236325679" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.231 [INFO][4160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.232 [INFO][4160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" iface="eth0" netns="/var/run/netns/cni-9a28b511-3dd5-cc9d-c96d-0cbd57db8687" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.233 [INFO][4160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" iface="eth0" netns="/var/run/netns/cni-9a28b511-3dd5-cc9d-c96d-0cbd57db8687" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.233 [INFO][4160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" iface="eth0" netns="/var/run/netns/cni-9a28b511-3dd5-cc9d-c96d-0cbd57db8687" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.234 [INFO][4160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.234 [INFO][4160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.258 [INFO][4173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.258 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.258 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.270 [WARNING][4173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.270 [INFO][4173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.274 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:30.280424 containerd[1482]: 2025-09-13 00:03:30.276 [INFO][4160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:30.285271 containerd[1482]: time="2025-09-13T00:03:30.280532180Z" level=info msg="TearDown network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" successfully" Sep 13 00:03:30.285271 containerd[1482]: time="2025-09-13T00:03:30.280564541Z" level=info msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" returns successfully" Sep 13 00:03:30.285271 containerd[1482]: time="2025-09-13T00:03:30.284926301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-65nq2,Uid:c62008ae-00ba-4b2b-b2d4-a36f45e38687,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:03:30.286356 systemd[1]: run-netns-cni\x2d9a28b511\x2d3dd5\x2dcc9d\x2dc96d\x2d0cbd57db8687.mount: Deactivated successfully. Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.234 [INFO][4161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.234 [INFO][4161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" iface="eth0" netns="/var/run/netns/cni-41419eb1-c8e0-46a8-afc1-1314abf998cd" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.235 [INFO][4161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" iface="eth0" netns="/var/run/netns/cni-41419eb1-c8e0-46a8-afc1-1314abf998cd" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.235 [INFO][4161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" iface="eth0" netns="/var/run/netns/cni-41419eb1-c8e0-46a8-afc1-1314abf998cd" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.235 [INFO][4161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.235 [INFO][4161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.260 [INFO][4178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.260 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.275 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.293 [WARNING][4178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.293 [INFO][4178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.296 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:30.302233 containerd[1482]: 2025-09-13 00:03:30.300 [INFO][4161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:30.305385 containerd[1482]: time="2025-09-13T00:03:30.304985655Z" level=info msg="TearDown network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" successfully" Sep 13 00:03:30.305385 containerd[1482]: time="2025-09-13T00:03:30.305018736Z" level=info msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" returns successfully" Sep 13 00:03:30.304961 systemd[1]: run-netns-cni\x2d41419eb1\x2dc8e0\x2d46a8\x2dafc1\x2d1314abf998cd.mount: Deactivated successfully. 
Sep 13 00:03:30.307161 containerd[1482]: time="2025-09-13T00:03:30.306733903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qj7nd,Uid:9ec71789-a324-43ba-b57e-8169dfe5b109,Namespace:calico-system,Attempt:1,}" Sep 13 00:03:30.499385 systemd-networkd[1354]: cali69eec54acef: Link UP Sep 13 00:03:30.501537 systemd-networkd[1354]: cali69eec54acef: Gained carrier Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.380 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0 calico-apiserver-f9bf767ff- calico-apiserver c62008ae-00ba-4b2b-b2d4-a36f45e38687 943 0 2025-09-13 00:03:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9bf767ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e calico-apiserver-f9bf767ff-65nq2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali69eec54acef [] [] }} ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.380 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.426 [INFO][4211] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.426 [INFO][4211] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"calico-apiserver-f9bf767ff-65nq2", "timestamp":"2025-09-13 00:03:30.426463449 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.426 [INFO][4211] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.426 [INFO][4211] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.426 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.442 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.450 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.455 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.457 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.459 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.459 [INFO][4211] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.462 [INFO][4211] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771 Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.468 [INFO][4211] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.481 [INFO][4211] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.130/26] block=192.168.28.128/26 handle="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.481 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.130/26] handle="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.481 [INFO][4211] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:03:30.524978 containerd[1482]: 2025-09-13 00:03:30.481 [INFO][4211] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.130/26] IPv6=[] ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.486 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62008ae-00ba-4b2b-b2d4-a36f45e38687", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"calico-apiserver-f9bf767ff-65nq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69eec54acef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.486 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.130/32] ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.486 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69eec54acef ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.499 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.504 
[INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62008ae-00ba-4b2b-b2d4-a36f45e38687", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771", Pod:"calico-apiserver-f9bf767ff-65nq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69eec54acef", MAC:"a6:a1:e2:a3:0f:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:30.525502 containerd[1482]: 2025-09-13 00:03:30.515 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-65nq2" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:30.554986 containerd[1482]: time="2025-09-13T00:03:30.553670881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:30.554986 containerd[1482]: time="2025-09-13T00:03:30.553824726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:30.554986 containerd[1482]: time="2025-09-13T00:03:30.553860087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:30.554986 containerd[1482]: time="2025-09-13T00:03:30.554032531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:30.589167 systemd[1]: Started cri-containerd-dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771.scope - libcontainer container dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771. 
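
[editor's note] The endpoint updates above carry host-generated MACs (2e:b7:b9:7d:e8:15 for the whisker veth, a6:a1:e2:a3:0f:a0 here); both are locally administered unicast addresses, i.e. bit 0x02 of the first octet is set and bit 0x01 is clear. A sketch of generating such an address follows; that Calico derives it randomly this way is an assumption, not confirmed by the log.

package main

import (
    "crypto/rand"
    "fmt"
    "net"
)

// randomLocalMAC returns a random locally administered, unicast MAC,
// the same address class as the ones recorded in the endpoints above.
func randomLocalMAC() (net.HardwareAddr, error) {
    mac := make(net.HardwareAddr, 6)
    if _, err := rand.Read(mac); err != nil {
        return nil, err
    }
    mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
    return mac, nil
}

func main() {
    mac, err := randomLocalMAC()
    if err != nil {
        panic(err)
    }
    fmt.Println(mac) // e.g. a6:a1:e2:a3:0f:a0
}
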
Sep 13 00:03:30.611655 systemd-networkd[1354]: calie094c1e89f0: Link UP Sep 13 00:03:30.614483 systemd-networkd[1354]: calie094c1e89f0: Gained carrier Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.387 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0 goldmane-7988f88666- calico-system 9ec71789-a324-43ba-b57e-8169dfe5b109 944 0 2025-09-13 00:03:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e goldmane-7988f88666-qj7nd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie094c1e89f0 [] [] }} ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.387 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.428 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" HandleID="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.428 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" HandleID="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"goldmane-7988f88666-qj7nd", "timestamp":"2025-09-13 00:03:30.428820034 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.429 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.482 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
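Note the serialization visible in the [4217] entries: the goldmane allocation logs "About to acquire host-wide IPAM lock" at 00:03:30.429 but "Acquired" only at 00:03:30.482, because it queued behind the 65nq2 allocation that was still in flight. A minimal Go sketch of that pattern, with a plain sync.Mutex standing in for Calico's actual lock implementation:

package main

import (
    "fmt"
    "sync"
    "time"
)

// ipamMu serializes per-host IP assignment: every request announces
// itself immediately, but only one proceeds at a time.
var ipamMu sync.Mutex

func assign(pod string, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println(pod, "about to acquire host-wide IPAM lock")
    ipamMu.Lock()
    defer ipamMu.Unlock()
    fmt.Println(pod, "acquired host-wide IPAM lock")
    time.Sleep(10 * time.Millisecond) // stand-in for block lookup + claim
}

func main() {
    var wg sync.WaitGroup
    for _, p := range []string{"calico-apiserver-f9bf767ff-65nq2", "goldmane-7988f88666-qj7nd"} {
        wg.Add(1)
        go assign(p, &wg)
    }
    wg.Wait()
}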
Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.482 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.543 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.557 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.564 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.567 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.572 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.572 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.575 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422 Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.584 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.598 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.131/26] block=192.168.28.128/26 handle="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.599 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.131/26] handle="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.599 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
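The [4217] walk above is the happy path of block-affine IPAM: the node already has an affinity for 192.168.28.128/26, the block loads, and the next free address (.131, right after the .130 just handed to 65nq2) is claimed. A toy sketch of "pick the first unclaimed address in the affine block", using only the standard library; Calico's real allocator in ipam/ipam.go additionally tracks handles, reservations and multiple blocks:

package main

import (
    "fmt"
    "net/netip"
)

// nextFree scans the /26 in order, skipping the network address, and
// returns the first address not yet claimed.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
    for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
        if !claimed[a] {
            return a, true
        }
    }
    return netip.Addr{}, false
}

func main() {
    block := netip.MustParsePrefix("192.168.28.128/26")
    claimed := map[netip.Addr]bool{
        netip.MustParseAddr("192.168.28.129"): true,
        netip.MustParseAddr("192.168.28.130"): true,
    }
    a, ok := nextFree(block, claimed)
    fmt.Println(a, ok) // 192.168.28.131 true
}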
Sep 13 00:03:30.633476 containerd[1482]: 2025-09-13 00:03:30.599 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.131/26] IPv6=[] ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" HandleID="k8s-pod-network.3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.604 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9ec71789-a324-43ba-b57e-8169dfe5b109", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"goldmane-7988f88666-qj7nd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie094c1e89f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.605 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.131/32] ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.606 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie094c1e89f0 ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.614 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.616 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" 
Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9ec71789-a324-43ba-b57e-8169dfe5b109", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422", Pod:"goldmane-7988f88666-qj7nd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie094c1e89f0", MAC:"4e:af:b4:28:9a:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:30.634149 containerd[1482]: 2025-09-13 00:03:30.631 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422" Namespace="calico-system" Pod="goldmane-7988f88666-qj7nd" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:30.658807 containerd[1482]: time="2025-09-13T00:03:30.658112765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:30.659224 containerd[1482]: time="2025-09-13T00:03:30.659043831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:30.659224 containerd[1482]: time="2025-09-13T00:03:30.659105033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:30.659450 containerd[1482]: time="2025-09-13T00:03:30.659349039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:30.685385 systemd[1]: Started cri-containerd-3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422.scope - libcontainer container 3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422. 
Sep 13 00:03:30.687819 containerd[1482]: time="2025-09-13T00:03:30.687780144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-65nq2,Uid:c62008ae-00ba-4b2b-b2d4-a36f45e38687,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\"" Sep 13 00:03:30.690407 containerd[1482]: time="2025-09-13T00:03:30.690225772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:03:30.739793 containerd[1482]: time="2025-09-13T00:03:30.739029599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-qj7nd,Uid:9ec71789-a324-43ba-b57e-8169dfe5b109,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422\"" Sep 13 00:03:31.458417 kubelet[2551]: I0913 00:03:31.457825 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:03:32.160388 containerd[1482]: time="2025-09-13T00:03:32.160239205Z" level=info msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" Sep 13 00:03:32.164025 containerd[1482]: time="2025-09-13T00:03:32.161578440Z" level=info msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" Sep 13 00:03:32.169098 containerd[1482]: time="2025-09-13T00:03:32.168078930Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:32.169098 containerd[1482]: time="2025-09-13T00:03:32.168305576Z" level=info msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" Sep 13 00:03:32.272990 systemd-networkd[1354]: cali69eec54acef: Gained IPv6LL Sep 13 00:03:32.273253 systemd-networkd[1354]: calie094c1e89f0: Gained IPv6LL Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.267 [INFO][4414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.268 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" iface="eth0" netns="/var/run/netns/cni-9c1b28e8-eb0d-243c-1bf7-051e3bded208" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.268 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" iface="eth0" netns="/var/run/netns/cni-9c1b28e8-eb0d-243c-1bf7-051e3bded208" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.268 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" iface="eth0" netns="/var/run/netns/cni-9c1b28e8-eb0d-243c-1bf7-051e3bded208" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.268 [INFO][4414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.268 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.326 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.327 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.327 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.350 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.350 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.352 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:32.362039 containerd[1482]: 2025-09-13 00:03:32.359 [INFO][4414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:32.363988 containerd[1482]: time="2025-09-13T00:03:32.363428207Z" level=info msg="TearDown network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" successfully" Sep 13 00:03:32.364129 containerd[1482]: time="2025-09-13T00:03:32.364096904Z" level=info msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" returns successfully" Sep 13 00:03:32.366263 containerd[1482]: time="2025-09-13T00:03:32.366204840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6fdv4,Uid:ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d,Namespace:kube-system,Attempt:1,}" Sep 13 00:03:32.366785 systemd[1]: run-netns-cni\x2d9c1b28e8\x2deb0d\x2d243c\x2d1bf7\x2d051e3bded208.mount: Deactivated successfully. Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.288 [INFO][4413] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.288 [INFO][4413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="/var/run/netns/cni-11dd5b76-f0b8-0afa-beb7-b6fe53d1fb60" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.290 [INFO][4413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="/var/run/netns/cni-11dd5b76-f0b8-0afa-beb7-b6fe53d1fb60" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.297 [INFO][4413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="/var/run/netns/cni-11dd5b76-f0b8-0afa-beb7-b6fe53d1fb60" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.297 [INFO][4413] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.297 [INFO][4413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.347 [INFO][4442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.347 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.352 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.372 [WARNING][4442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.372 [INFO][4442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.378 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:32.398623 containerd[1482]: 2025-09-13 00:03:32.385 [INFO][4413] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:32.402523 containerd[1482]: time="2025-09-13T00:03:32.402465910Z" level=info msg="TearDown network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" successfully" Sep 13 00:03:32.402523 containerd[1482]: time="2025-09-13T00:03:32.402507551Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" returns successfully" Sep 13 00:03:32.406232 containerd[1482]: time="2025-09-13T00:03:32.405798077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-9zn6h,Uid:760badfc-d54f-4563-b3b8-27b25aca93e2,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:03:32.411209 systemd[1]: run-netns-cni\x2d11dd5b76\x2df0b8\x2d0afa\x2dbeb7\x2db6fe53d1fb60.mount: Deactivated successfully. Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.288 [INFO][4396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.288 [INFO][4396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" iface="eth0" netns="/var/run/netns/cni-4fad35dc-a31e-d581-3fa2-f8bdd51d2d65" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.290 [INFO][4396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" iface="eth0" netns="/var/run/netns/cni-4fad35dc-a31e-d581-3fa2-f8bdd51d2d65" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.292 [INFO][4396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" iface="eth0" netns="/var/run/netns/cni-4fad35dc-a31e-d581-3fa2-f8bdd51d2d65" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.292 [INFO][4396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.292 [INFO][4396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.359 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.360 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.381 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.421 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.421 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.424 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:32.444543 containerd[1482]: 2025-09-13 00:03:32.431 [INFO][4396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:32.446337 containerd[1482]: time="2025-09-13T00:03:32.446301218Z" level=info msg="TearDown network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" successfully" Sep 13 00:03:32.446589 containerd[1482]: time="2025-09-13T00:03:32.446375820Z" level=info msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" returns successfully" Sep 13 00:03:32.448877 containerd[1482]: time="2025-09-13T00:03:32.448805123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-488cx,Uid:59113340-495e-48cd-a556-b5807ae1b886,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.307 [INFO][4409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.308 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" iface="eth0" netns="/var/run/netns/cni-a216363b-c97e-afe6-cdf2-e823eeaf71f2" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.308 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" iface="eth0" netns="/var/run/netns/cni-a216363b-c97e-afe6-cdf2-e823eeaf71f2" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.311 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" iface="eth0" netns="/var/run/netns/cni-a216363b-c97e-afe6-cdf2-e823eeaf71f2" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.311 [INFO][4409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.311 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.387 [INFO][4450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.387 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.425 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.450 [WARNING][4450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.453 [INFO][4450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.457 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:32.463028 containerd[1482]: 2025-09-13 00:03:32.460 [INFO][4409] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:32.464876 containerd[1482]: time="2025-09-13T00:03:32.463978321Z" level=info msg="TearDown network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" successfully" Sep 13 00:03:32.464876 containerd[1482]: time="2025-09-13T00:03:32.464072083Z" level=info msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" returns successfully" Sep 13 00:03:32.465486 containerd[1482]: time="2025-09-13T00:03:32.465414918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559ff8dc6-m5kcg,Uid:5bc73e8e-3e70-4fad-b2fe-fc395c348d63,Namespace:calico-system,Attempt:1,}" Sep 13 00:03:32.652451 systemd-networkd[1354]: cali3e99bff266b: Link UP Sep 13 00:03:32.653727 systemd-networkd[1354]: cali3e99bff266b: Gained carrier Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.505 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0 coredns-7c65d6cfc9- kube-system ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d 962 0 2025-09-13 00:02:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e coredns-7c65d6cfc9-6fdv4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e99bff266b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.505 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.584 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" HandleID="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.584 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" HandleID="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"coredns-7c65d6cfc9-6fdv4", "timestamp":"2025-09-13 00:03:32.583968864 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.584 [INFO][4510] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.584 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.584 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.598 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.604 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.611 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.615 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.622 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.622 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.624 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2 Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.632 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.642 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.132/26] block=192.168.28.128/26 handle="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.643 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.132/26] handle="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.643 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
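coredns draws the fourth address from the same block (.132), and its endpoint dumps (one follows just below) are the first in this log to carry Ports. They print in Go's hex-literal form, apparently via %#v-style formatting, so Port:0x35 and Port:0x23c1 are just CoreDNS's usual ports in disguise:

package main

import "fmt"

func main() {
    // 0x35 is the DNS port, 0x23c1 the CoreDNS metrics port.
    fmt.Println(0x35, 0x23c1) // 53 9153
}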
Sep 13 00:03:32.678786 containerd[1482]: 2025-09-13 00:03:32.643 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.132/26] IPv6=[] ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" HandleID="k8s-pod-network.5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.646 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"coredns-7c65d6cfc9-6fdv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e99bff266b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.646 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.132/32] ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.646 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e99bff266b ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.654 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.658 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2", Pod:"coredns-7c65d6cfc9-6fdv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e99bff266b", MAC:"3e:a2:a2:56:74:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.680125 containerd[1482]: 2025-09-13 00:03:32.674 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6fdv4" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:32.708693 containerd[1482]: time="2025-09-13T00:03:32.708531127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:32.708693 containerd[1482]: time="2025-09-13T00:03:32.708622449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:32.708693 containerd[1482]: time="2025-09-13T00:03:32.708638050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:32.710397 containerd[1482]: time="2025-09-13T00:03:32.709968524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:32.740353 systemd[1]: Started cri-containerd-5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2.scope - libcontainer container 5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2. Sep 13 00:03:32.773036 systemd-networkd[1354]: cali8a627638229: Link UP Sep 13 00:03:32.773819 systemd-networkd[1354]: cali8a627638229: Gained carrier Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.543 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0 calico-apiserver-5889f7d879- calico-apiserver 59113340-495e-48cd-a556-b5807ae1b886 963 0 2025-09-13 00:03:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5889f7d879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e calico-apiserver-5889f7d879-488cx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a627638229 [] [] }} ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.544 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.616 [INFO][4517] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" HandleID="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.616 [INFO][4517] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" HandleID="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"calico-apiserver-5889f7d879-488cx", "timestamp":"2025-09-13 00:03:32.616352552 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.619 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.643 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
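The [4517] allocation for calico-apiserver-5889f7d879-488cx shows the lock queueing again, and the timestamps let you read off the wait directly: "About to acquire" at 00:03:32.619, "Acquired" at 00:03:32.643, so roughly 24 ms spent behind the coredns claim. A quick check of that arithmetic:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the [4517] ipam_plugin.go 353/368 entries.
    const layout = "2006-01-02 15:04:05.000"
    t1, _ := time.Parse(layout, "2025-09-13 00:03:32.619")
    t2, _ := time.Parse(layout, "2025-09-13 00:03:32.643")
    fmt.Println(t2.Sub(t1)) // 24ms
}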
Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.643 [INFO][4517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.698 [INFO][4517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.705 [INFO][4517] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.712 [INFO][4517] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.716 [INFO][4517] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.721 [INFO][4517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.721 [INFO][4517] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.725 [INFO][4517] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.732 [INFO][4517] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.745 [INFO][4517] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.133/26] block=192.168.28.128/26 handle="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.745 [INFO][4517] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.133/26] handle="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.746 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
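"Writing block in order to claim IPs" followed by "Successfully claimed" suggests the claim is committed by writing the whole block object back to the datastore, presumably with optimistic concurrency (write succeeds only if the block hasn't changed since it was read; otherwise re-read and retry). The datastore semantics are assumed, not shown in this log, so the sketch below is only the shape of that pattern:

package main

import (
    "errors"
    "fmt"
)

type block struct {
    rev  int             // revision the datastore would check on write
    used map[string]bool // addresses already claimed in this block
}

var errConflict = errors.New("revision conflict")

// writeBlock commits only if the caller's revision is still current.
func writeBlock(store *block, rev int, ip string) error {
    if store.rev != rev {
        return errConflict
    }
    store.used[ip] = true
    store.rev++
    return nil
}

// claim retries the read-modify-write until it lands without conflict.
func claim(store *block, ip string) {
    for {
        rev := store.rev // "read" the block
        if writeBlock(store, rev, ip) == nil {
            return
        }
    }
}

func main() {
    b := &block{used: map[string]bool{}}
    claim(b, "192.168.28.133")
    fmt.Println(b.used, b.rev)
}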
Sep 13 00:03:32.795446 containerd[1482]: 2025-09-13 00:03:32.746 [INFO][4517] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.133/26] IPv6=[] ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" HandleID="k8s-pod-network.55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 00:03:32.754 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"59113340-495e-48cd-a556-b5807ae1b886", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"calico-apiserver-5889f7d879-488cx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a627638229", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 00:03:32.755 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.133/32] ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 00:03:32.755 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a627638229 ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 00:03:32.773 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 
00:03:32.774 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"59113340-495e-48cd-a556-b5807ae1b886", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c", Pod:"calico-apiserver-5889f7d879-488cx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a627638229", MAC:"4a:c0:5d:a4:ba:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.796067 containerd[1482]: 2025-09-13 00:03:32.790 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-488cx" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:32.818693 containerd[1482]: time="2025-09-13T00:03:32.818568249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6fdv4,Uid:ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2\"" Sep 13 00:03:32.826842 containerd[1482]: time="2025-09-13T00:03:32.826782904Z" level=info msg="CreateContainer within sandbox \"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:03:32.846365 containerd[1482]: time="2025-09-13T00:03:32.844984541Z" level=info msg="CreateContainer within sandbox \"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8c939f129985b5bde28b9afea1b3f408d6655150d3e981a57eb6fd1860ce533\"" Sep 13 00:03:32.847162 containerd[1482]: time="2025-09-13T00:03:32.847037235Z" level=info msg="StartContainer for \"a8c939f129985b5bde28b9afea1b3f408d6655150d3e981a57eb6fd1860ce533\"" Sep 13 00:03:32.851208 containerd[1482]: time="2025-09-13T00:03:32.850705011Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:32.851208 containerd[1482]: time="2025-09-13T00:03:32.850764532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:32.851208 containerd[1482]: time="2025-09-13T00:03:32.850792093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:32.851208 containerd[1482]: time="2025-09-13T00:03:32.850877695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:32.884514 systemd[1]: Started cri-containerd-55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c.scope - libcontainer container 55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c. Sep 13 00:03:32.889921 systemd-networkd[1354]: cali4595eb21a4b: Link UP Sep 13 00:03:32.894289 systemd-networkd[1354]: cali4595eb21a4b: Gained carrier Sep 13 00:03:32.920087 systemd[1]: Started cri-containerd-a8c939f129985b5bde28b9afea1b3f408d6655150d3e981a57eb6fd1860ce533.scope - libcontainer container a8c939f129985b5bde28b9afea1b3f408d6655150d3e981a57eb6fd1860ce533. Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.559 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0 calico-apiserver-f9bf767ff- calico-apiserver 760badfc-d54f-4563-b3b8-27b25aca93e2 964 0 2025-09-13 00:03:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f9bf767ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e calico-apiserver-f9bf767ff-9zn6h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4595eb21a4b [] [] }} ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.559 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.639 [INFO][4523] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.639 [INFO][4523] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032d7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"calico-apiserver-f9bf767ff-9zn6h", "timestamp":"2025-09-13 00:03:32.639448157 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.639 [INFO][4523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.746 [INFO][4523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.747 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.805 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.819 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.830 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.835 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.839 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.839 [INFO][4523] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.844 [INFO][4523] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8 Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.855 [INFO][4523] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.867 [INFO][4523] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.134/26] block=192.168.28.128/26 handle="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.867 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.134/26] handle="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.867 [INFO][4523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:03:32.932964 containerd[1482]: 2025-09-13 00:03:32.868 [INFO][4523] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.134/26] IPv6=[] ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.877 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"760badfc-d54f-4563-b3b8-27b25aca93e2", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"calico-apiserver-f9bf767ff-9zn6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595eb21a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.877 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.134/32] ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.877 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4595eb21a4b ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.896 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.901 
[INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"760badfc-d54f-4563-b3b8-27b25aca93e2", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8", Pod:"calico-apiserver-f9bf767ff-9zn6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595eb21a4b", MAC:"8e:0a:02:0d:da:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:32.934222 containerd[1482]: 2025-09-13 00:03:32.928 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Namespace="calico-apiserver" Pod="calico-apiserver-f9bf767ff-9zn6h" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:32.983507 containerd[1482]: time="2025-09-13T00:03:32.983452408Z" level=info msg="StartContainer for \"a8c939f129985b5bde28b9afea1b3f408d6655150d3e981a57eb6fd1860ce533\" returns successfully" Sep 13 00:03:32.998571 containerd[1482]: time="2025-09-13T00:03:32.998003029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:33.000176 containerd[1482]: time="2025-09-13T00:03:32.999027696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:33.000176 containerd[1482]: time="2025-09-13T00:03:32.999781356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.000176 containerd[1482]: time="2025-09-13T00:03:32.999910039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.003185 systemd-networkd[1354]: cali46bd433cb9b: Link UP Sep 13 00:03:33.007128 systemd-networkd[1354]: cali46bd433cb9b: Gained carrier Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.590 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0 calico-kube-controllers-5559ff8dc6- calico-system 5bc73e8e-3e70-4fad-b2fe-fc395c348d63 965 0 2025-09-13 00:03:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5559ff8dc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e calico-kube-controllers-5559ff8dc6-m5kcg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali46bd433cb9b [] [] }} ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.591 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.660 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" HandleID="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.660 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" HandleID="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d39c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"calico-kube-controllers-5559ff8dc6-m5kcg", "timestamp":"2025-09-13 00:03:32.660134339 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.661 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.868 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.868 [INFO][4533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.901 [INFO][4533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.922 [INFO][4533] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.936 [INFO][4533] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.939 [INFO][4533] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.943 [INFO][4533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.944 [INFO][4533] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.949 [INFO][4533] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876 Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.961 [INFO][4533] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.979 [INFO][4533] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.135/26] block=192.168.28.128/26 handle="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.979 [INFO][4533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.135/26] handle="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.979 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:03:33.041560 containerd[1482]: 2025-09-13 00:03:32.979 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.135/26] IPv6=[] ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" HandleID="k8s-pod-network.5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:32.987 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0", GenerateName:"calico-kube-controllers-5559ff8dc6-", Namespace:"calico-system", SelfLink:"", UID:"5bc73e8e-3e70-4fad-b2fe-fc395c348d63", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559ff8dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"calico-kube-controllers-5559ff8dc6-m5kcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bd433cb9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:32.988 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.135/32] ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:32.988 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46bd433cb9b ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:33.008 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" 
WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:33.012 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0", GenerateName:"calico-kube-controllers-5559ff8dc6-", Namespace:"calico-system", SelfLink:"", UID:"5bc73e8e-3e70-4fad-b2fe-fc395c348d63", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559ff8dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876", Pod:"calico-kube-controllers-5559ff8dc6-m5kcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bd433cb9b", MAC:"8a:98:2f:d6:2e:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:33.043356 containerd[1482]: 2025-09-13 00:03:33.035 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876" Namespace="calico-system" Pod="calico-kube-controllers-5559ff8dc6-m5kcg" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:33.058232 systemd[1]: Started cri-containerd-3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8.scope - libcontainer container 3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8. Sep 13 00:03:33.066147 containerd[1482]: time="2025-09-13T00:03:33.066011849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-488cx,Uid:59113340-495e-48cd-a556-b5807ae1b886,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c\"" Sep 13 00:03:33.081784 containerd[1482]: time="2025-09-13T00:03:33.081036993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:33.082119 containerd[1482]: time="2025-09-13T00:03:33.081749651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:33.082119 containerd[1482]: time="2025-09-13T00:03:33.081778052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.082119 containerd[1482]: time="2025-09-13T00:03:33.081961816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.108075 systemd[1]: Started cri-containerd-5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876.scope - libcontainer container 5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876. Sep 13 00:03:33.158330 containerd[1482]: time="2025-09-13T00:03:33.158240046Z" level=info msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" Sep 13 00:03:33.165812 containerd[1482]: time="2025-09-13T00:03:33.165631394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5559ff8dc6-m5kcg,Uid:5bc73e8e-3e70-4fad-b2fe-fc395c348d63,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876\"" Sep 13 00:03:33.183755 containerd[1482]: time="2025-09-13T00:03:33.183453930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f9bf767ff-9zn6h,Uid:760badfc-d54f-4563-b3b8-27b25aca93e2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\"" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.239 [INFO][4786] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.240 [INFO][4786] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" iface="eth0" netns="/var/run/netns/cni-a1e57ab9-7e29-4c4f-1adf-d0829d5e422e" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.241 [INFO][4786] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" iface="eth0" netns="/var/run/netns/cni-a1e57ab9-7e29-4c4f-1adf-d0829d5e422e" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.241 [INFO][4786] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" iface="eth0" netns="/var/run/netns/cni-a1e57ab9-7e29-4c4f-1adf-d0829d5e422e" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.241 [INFO][4786] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.241 [INFO][4786] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.262 [INFO][4794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.262 [INFO][4794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.262 [INFO][4794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.276 [WARNING][4794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.276 [INFO][4794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.278 [INFO][4794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:33.282422 containerd[1482]: 2025-09-13 00:03:33.280 [INFO][4786] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:33.283988 containerd[1482]: time="2025-09-13T00:03:33.283107876Z" level=info msg="TearDown network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" successfully" Sep 13 00:03:33.283988 containerd[1482]: time="2025-09-13T00:03:33.283143517Z" level=info msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" returns successfully" Sep 13 00:03:33.283988 containerd[1482]: time="2025-09-13T00:03:33.283770213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r9g,Uid:96d6c8b4-b0e0-4570-9741-27188bdbb60e,Namespace:calico-system,Attempt:1,}" Sep 13 00:03:33.377388 systemd[1]: run-netns-cni\x2da1e57ab9\x2d7e29\x2d4c4f\x2d1adf\x2dd0829d5e422e.mount: Deactivated successfully. Sep 13 00:03:33.377486 systemd[1]: run-netns-cni\x2d4fad35dc\x2da31e\x2dd581\x2d3fa2\x2df8bdd51d2d65.mount: Deactivated successfully. Sep 13 00:03:33.377533 systemd[1]: run-netns-cni\x2da216363b\x2dc97e\x2dafe6\x2dcdf2\x2de823eeaf71f2.mount: Deactivated successfully. 
Sep 13 00:03:33.446999 systemd-networkd[1354]: cali4403672b85d: Link UP Sep 13 00:03:33.449669 systemd-networkd[1354]: cali4403672b85d: Gained carrier Sep 13 00:03:33.453879 kubelet[2551]: I0913 00:03:33.453503 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6fdv4" podStartSLOduration=42.45348339 podStartE2EDuration="42.45348339s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:03:33.452789212 +0000 UTC m=+47.459279631" watchObservedRunningTime="2025-09-13 00:03:33.45348339 +0000 UTC m=+47.459973809" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.336 [INFO][4800] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0 csi-node-driver- calico-system 96d6c8b4-b0e0-4570-9741-27188bdbb60e 985 0 2025-09-13 00:03:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e csi-node-driver-54r9g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4403672b85d [] [] }} ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.336 [INFO][4800] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.383 [INFO][4812] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" HandleID="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.383 [INFO][4812] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" HandleID="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"csi-node-driver-54r9g", "timestamp":"2025-09-13 00:03:33.383112632 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.383 [INFO][4812] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.383 [INFO][4812] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.383 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.399 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.404 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.410 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.414 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.418 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.418 [INFO][4812] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.420 [INFO][4812] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976 Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.426 [INFO][4812] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.437 [INFO][4812] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.136/26] block=192.168.28.128/26 handle="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.437 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.136/26] handle="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.437 [INFO][4812] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:03:33.485750 containerd[1482]: 2025-09-13 00:03:33.437 [INFO][4812] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.136/26] IPv6=[] ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" HandleID="k8s-pod-network.4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.441 [INFO][4800] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96d6c8b4-b0e0-4570-9741-27188bdbb60e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"csi-node-driver-54r9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4403672b85d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.441 [INFO][4800] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.136/32] ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.441 [INFO][4800] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4403672b85d ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.451 [INFO][4800] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.454 [INFO][4800] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96d6c8b4-b0e0-4570-9741-27188bdbb60e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976", Pod:"csi-node-driver-54r9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4403672b85d", MAC:"0e:0d:d5:e9:bc:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:33.488995 containerd[1482]: 2025-09-13 00:03:33.481 [INFO][4800] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976" Namespace="calico-system" Pod="csi-node-driver-54r9g" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:33.519654 containerd[1482]: time="2025-09-13T00:03:33.519476036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:33.519992 containerd[1482]: time="2025-09-13T00:03:33.519859246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:33.520552 containerd[1482]: time="2025-09-13T00:03:33.520493742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.521569 containerd[1482]: time="2025-09-13T00:03:33.520730668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:33.550200 systemd[1]: run-containerd-runc-k8s.io-4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976-runc.ioV8a8.mount: Deactivated successfully. Sep 13 00:03:33.561326 systemd[1]: Started cri-containerd-4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976.scope - libcontainer container 4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976. 
Sep 13 00:03:33.592408 containerd[1482]: time="2025-09-13T00:03:33.592329497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r9g,Uid:96d6c8b4-b0e0-4570-9741-27188bdbb60e,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976\"" Sep 13 00:03:34.063112 systemd-networkd[1354]: cali4595eb21a4b: Gained IPv6LL Sep 13 00:03:34.159110 containerd[1482]: time="2025-09-13T00:03:34.158792357Z" level=info msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.247 [INFO][4887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.247 [INFO][4887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" iface="eth0" netns="/var/run/netns/cni-b9a64555-3455-3677-7566-263385368dde" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.247 [INFO][4887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" iface="eth0" netns="/var/run/netns/cni-b9a64555-3455-3677-7566-263385368dde" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.247 [INFO][4887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" iface="eth0" netns="/var/run/netns/cni-b9a64555-3455-3677-7566-263385368dde" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.248 [INFO][4887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.248 [INFO][4887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.303 [INFO][4894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.303 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.303 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.322 [WARNING][4894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.322 [INFO][4894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.326 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:34.333011 containerd[1482]: 2025-09-13 00:03:34.330 [INFO][4887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:34.338971 containerd[1482]: time="2025-09-13T00:03:34.336811158Z" level=info msg="TearDown network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" successfully" Sep 13 00:03:34.338971 containerd[1482]: time="2025-09-13T00:03:34.336854439Z" level=info msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" returns successfully" Sep 13 00:03:34.337530 systemd[1]: run-netns-cni\x2db9a64555\x2d3455\x2d3677\x2d7566\x2d263385368dde.mount: Deactivated successfully. Sep 13 00:03:34.340763 containerd[1482]: time="2025-09-13T00:03:34.340164562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s5g7w,Uid:8a8ab913-32e4-496a-82e5-7f7a7a768ff5,Namespace:kube-system,Attempt:1,}" Sep 13 00:03:34.559737 systemd-networkd[1354]: cali7c5d8498784: Link UP Sep 13 00:03:34.564058 systemd-networkd[1354]: cali7c5d8498784: Gained carrier Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.433 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0 coredns-7c65d6cfc9- kube-system 8a8ab913-32e4-496a-82e5-7f7a7a768ff5 1006 0 2025-09-13 00:02:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e coredns-7c65d6cfc9-s5g7w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c5d8498784 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.434 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.479 [INFO][4912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" HandleID="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" 
Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.479 [INFO][4912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" HandleID="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"coredns-7c65d6cfc9-s5g7w", "timestamp":"2025-09-13 00:03:34.479416516 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.479 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.479 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.479 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.494 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.502 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.510 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.513 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.518 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.518 [INFO][4912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.520 [INFO][4912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.532 [INFO][4912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.545 [INFO][4912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.137/26] block=192.168.28.128/26 handle="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.545 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.137/26] 
handle="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.545 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:34.596493 containerd[1482]: 2025-09-13 00:03:34.545 [INFO][4912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.137/26] IPv6=[] ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" HandleID="k8s-pod-network.2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.548 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a8ab913-32e4-496a-82e5-7f7a7a768ff5", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"coredns-7c65d6cfc9-s5g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c5d8498784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.549 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.137/32] ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.549 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c5d8498784 ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" 
WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.568 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.571 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a8ab913-32e4-496a-82e5-7f7a7a768ff5", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f", Pod:"coredns-7c65d6cfc9-s5g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c5d8498784", MAC:"12:63:e1:4b:db:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:34.597386 containerd[1482]: 2025-09-13 00:03:34.590 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-s5g7w" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:34.634728 containerd[1482]: time="2025-09-13T00:03:34.633206473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:34.634728 containerd[1482]: time="2025-09-13T00:03:34.633675445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:34.634728 containerd[1482]: time="2025-09-13T00:03:34.633709046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:34.634728 containerd[1482]: time="2025-09-13T00:03:34.633816809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:34.640121 systemd-networkd[1354]: cali3e99bff266b: Gained IPv6LL Sep 13 00:03:34.640383 systemd-networkd[1354]: cali8a627638229: Gained IPv6LL Sep 13 00:03:34.688591 systemd[1]: run-containerd-runc-k8s.io-2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f-runc.sDjMxW.mount: Deactivated successfully. Sep 13 00:03:34.699104 systemd[1]: Started cri-containerd-2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f.scope - libcontainer container 2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f. Sep 13 00:03:34.771389 containerd[1482]: time="2025-09-13T00:03:34.771293759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-s5g7w,Uid:8a8ab913-32e4-496a-82e5-7f7a7a768ff5,Namespace:kube-system,Attempt:1,} returns sandbox id \"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f\"" Sep 13 00:03:34.779300 containerd[1482]: time="2025-09-13T00:03:34.779257557Z" level=info msg="CreateContainer within sandbox \"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:03:34.810720 containerd[1482]: time="2025-09-13T00:03:34.810553298Z" level=info msg="CreateContainer within sandbox \"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5f1ac35a312af235d2a1542ff936142d641e0924dc1297b1b451336104869a7\"" Sep 13 00:03:34.813219 containerd[1482]: time="2025-09-13T00:03:34.812985439Z" level=info msg="StartContainer for \"f5f1ac35a312af235d2a1542ff936142d641e0924dc1297b1b451336104869a7\"" Sep 13 00:03:34.863096 systemd[1]: Started cri-containerd-f5f1ac35a312af235d2a1542ff936142d641e0924dc1297b1b451336104869a7.scope - libcontainer container f5f1ac35a312af235d2a1542ff936142d641e0924dc1297b1b451336104869a7. 
Sep 13 00:03:34.928057 containerd[1482]: time="2025-09-13T00:03:34.927317131Z" level=info msg="StartContainer for \"f5f1ac35a312af235d2a1542ff936142d641e0924dc1297b1b451336104869a7\" returns successfully" Sep 13 00:03:34.959991 systemd-networkd[1354]: cali46bd433cb9b: Gained IPv6LL Sep 13 00:03:35.076934 containerd[1482]: time="2025-09-13T00:03:35.076187643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:35.078648 containerd[1482]: time="2025-09-13T00:03:35.077861484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:03:35.078648 containerd[1482]: time="2025-09-13T00:03:35.078576341Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:35.081200 containerd[1482]: time="2025-09-13T00:03:35.081163045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:35.083257 containerd[1482]: time="2025-09-13T00:03:35.083158453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.39287828s" Sep 13 00:03:35.083257 containerd[1482]: time="2025-09-13T00:03:35.083205254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:03:35.086527 containerd[1482]: time="2025-09-13T00:03:35.086099925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:03:35.088608 containerd[1482]: time="2025-09-13T00:03:35.088535864Z" level=info msg="CreateContainer within sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:03:35.128986 containerd[1482]: time="2025-09-13T00:03:35.128839567Z" level=info msg="CreateContainer within sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\"" Sep 13 00:03:35.129713 containerd[1482]: time="2025-09-13T00:03:35.129614226Z" level=info msg="StartContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\"" Sep 13 00:03:35.175431 systemd[1]: Started cri-containerd-8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c.scope - libcontainer container 8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c. Sep 13 00:03:35.215117 systemd-networkd[1354]: cali4403672b85d: Gained IPv6LL Sep 13 00:03:35.276180 containerd[1482]: time="2025-09-13T00:03:35.276050997Z" level=info msg="StartContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" returns successfully" Sep 13 00:03:35.376496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216189764.mount: Deactivated successfully. 
Sep 13 00:03:35.526362 kubelet[2551]: I0913 00:03:35.526293 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-s5g7w" podStartSLOduration=44.526273899 podStartE2EDuration="44.526273899s" podCreationTimestamp="2025-09-13 00:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:03:35.508120176 +0000 UTC m=+49.514610595" watchObservedRunningTime="2025-09-13 00:03:35.526273899 +0000 UTC m=+49.532764278" Sep 13 00:03:35.548092 kubelet[2551]: I0913 00:03:35.548016 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9bf767ff-65nq2" podStartSLOduration=28.152880852 podStartE2EDuration="32.547985828s" podCreationTimestamp="2025-09-13 00:03:03 +0000 UTC" firstStartedPulling="2025-09-13 00:03:30.689702357 +0000 UTC m=+44.696192736" lastFinishedPulling="2025-09-13 00:03:35.084807293 +0000 UTC m=+49.091297712" observedRunningTime="2025-09-13 00:03:35.547053285 +0000 UTC m=+49.553543704" watchObservedRunningTime="2025-09-13 00:03:35.547985828 +0000 UTC m=+49.554476247" Sep 13 00:03:36.623090 systemd-networkd[1354]: cali7c5d8498784: Gained IPv6LL Sep 13 00:03:37.938854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166650018.mount: Deactivated successfully. Sep 13 00:03:38.649908 containerd[1482]: time="2025-09-13T00:03:38.649841227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:38.651861 containerd[1482]: time="2025-09-13T00:03:38.651805032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:03:38.653709 containerd[1482]: time="2025-09-13T00:03:38.653569952Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:38.657926 containerd[1482]: time="2025-09-13T00:03:38.657839410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:38.658845 containerd[1482]: time="2025-09-13T00:03:38.658704590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.572566263s" Sep 13 00:03:38.658845 containerd[1482]: time="2025-09-13T00:03:38.658740230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:03:38.661847 containerd[1482]: time="2025-09-13T00:03:38.661808621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:03:38.663350 containerd[1482]: time="2025-09-13T00:03:38.663319215Z" level=info msg="CreateContainer within sandbox \"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:03:38.688222 containerd[1482]: time="2025-09-13T00:03:38.688167544Z" level=info 
msg="CreateContainer within sandbox \"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4\"" Sep 13 00:03:38.688909 containerd[1482]: time="2025-09-13T00:03:38.688867920Z" level=info msg="StartContainer for \"761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4\"" Sep 13 00:03:38.772117 systemd[1]: Started cri-containerd-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4.scope - libcontainer container 761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4. Sep 13 00:03:38.887179 containerd[1482]: time="2025-09-13T00:03:38.886695889Z" level=info msg="StartContainer for \"761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4\" returns successfully" Sep 13 00:03:39.079809 containerd[1482]: time="2025-09-13T00:03:39.079725074Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:39.083198 containerd[1482]: time="2025-09-13T00:03:39.082425215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:03:39.086250 containerd[1482]: time="2025-09-13T00:03:39.086191740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 424.341518ms" Sep 13 00:03:39.086250 containerd[1482]: time="2025-09-13T00:03:39.086246741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:03:39.088380 containerd[1482]: time="2025-09-13T00:03:39.088010141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:03:39.090911 containerd[1482]: time="2025-09-13T00:03:39.090616319Z" level=info msg="CreateContainer within sandbox \"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:03:39.114985 containerd[1482]: time="2025-09-13T00:03:39.114942185Z" level=info msg="CreateContainer within sandbox \"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09902710357c346c8441f57ac515e1077b9ac5d5458dd3a0edc6798e23b4d14f\"" Sep 13 00:03:39.115679 containerd[1482]: time="2025-09-13T00:03:39.115633241Z" level=info msg="StartContainer for \"09902710357c346c8441f57ac515e1077b9ac5d5458dd3a0edc6798e23b4d14f\"" Sep 13 00:03:39.159585 systemd[1]: Started cri-containerd-09902710357c346c8441f57ac515e1077b9ac5d5458dd3a0edc6798e23b4d14f.scope - libcontainer container 09902710357c346c8441f57ac515e1077b9ac5d5458dd3a0edc6798e23b4d14f. 
Sep 13 00:03:39.243033 containerd[1482]: time="2025-09-13T00:03:39.242942540Z" level=info msg="StartContainer for \"09902710357c346c8441f57ac515e1077b9ac5d5458dd3a0edc6798e23b4d14f\" returns successfully" Sep 13 00:03:39.562121 kubelet[2551]: I0913 00:03:39.561416 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-qj7nd" podStartSLOduration=22.641773714 podStartE2EDuration="30.561396052s" podCreationTimestamp="2025-09-13 00:03:09 +0000 UTC" firstStartedPulling="2025-09-13 00:03:30.741297582 +0000 UTC m=+44.747788001" lastFinishedPulling="2025-09-13 00:03:38.66091992 +0000 UTC m=+52.667410339" observedRunningTime="2025-09-13 00:03:39.549220299 +0000 UTC m=+53.555710718" watchObservedRunningTime="2025-09-13 00:03:39.561396052 +0000 UTC m=+53.567886471" Sep 13 00:03:40.523795 kubelet[2551]: I0913 00:03:40.523759 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:03:41.527237 kubelet[2551]: I0913 00:03:41.526345 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:03:43.155468 containerd[1482]: time="2025-09-13T00:03:43.155415788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:43.158584 containerd[1482]: time="2025-09-13T00:03:43.156930580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:03:43.158584 containerd[1482]: time="2025-09-13T00:03:43.158014322Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:43.173630 containerd[1482]: time="2025-09-13T00:03:43.172538147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:43.175039 containerd[1482]: time="2025-09-13T00:03:43.175001239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.086950097s" Sep 13 00:03:43.175618 containerd[1482]: time="2025-09-13T00:03:43.175588571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:03:43.179857 containerd[1482]: time="2025-09-13T00:03:43.179736498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:03:43.192428 containerd[1482]: time="2025-09-13T00:03:43.192384123Z" level=info msg="CreateContainer within sandbox \"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:03:43.218523 containerd[1482]: time="2025-09-13T00:03:43.218399949Z" level=info msg="CreateContainer within sandbox \"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b\"" Sep 13 00:03:43.221817 containerd[1482]: time="2025-09-13T00:03:43.221765899Z" level=info msg="StartContainer for \"278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b\"" Sep 13 00:03:43.272171 systemd[1]: Started cri-containerd-278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b.scope - libcontainer container 278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b. Sep 13 00:03:43.323387 containerd[1482]: time="2025-09-13T00:03:43.323236707Z" level=info msg="StartContainer for \"278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b\" returns successfully" Sep 13 00:03:43.448084 kubelet[2551]: I0913 00:03:43.447794 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5889f7d879-488cx" podStartSLOduration=32.430157658 podStartE2EDuration="38.447678557s" podCreationTimestamp="2025-09-13 00:03:05 +0000 UTC" firstStartedPulling="2025-09-13 00:03:33.06995795 +0000 UTC m=+47.076448369" lastFinishedPulling="2025-09-13 00:03:39.087478849 +0000 UTC m=+53.093969268" observedRunningTime="2025-09-13 00:03:39.611466017 +0000 UTC m=+53.617956436" watchObservedRunningTime="2025-09-13 00:03:43.447678557 +0000 UTC m=+57.454168976" Sep 13 00:03:43.570932 kubelet[2551]: I0913 00:03:43.569792 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5559ff8dc6-m5kcg" podStartSLOduration=24.561050979 podStartE2EDuration="34.569775837s" podCreationTimestamp="2025-09-13 00:03:09 +0000 UTC" firstStartedPulling="2025-09-13 00:03:33.168365744 +0000 UTC m=+47.174856163" lastFinishedPulling="2025-09-13 00:03:43.177090602 +0000 UTC m=+57.183581021" observedRunningTime="2025-09-13 00:03:43.569486071 +0000 UTC m=+57.575976450" watchObservedRunningTime="2025-09-13 00:03:43.569775837 +0000 UTC m=+57.576266256" Sep 13 00:03:43.587544 systemd[1]: Created slice kubepods-besteffort-poda07e934a_2922_47d0_86d3_902eb5caaffa.slice - libcontainer container kubepods-besteffort-poda07e934a_2922_47d0_86d3_902eb5caaffa.slice. 
Sep 13 00:03:43.601563 containerd[1482]: time="2025-09-13T00:03:43.601502543Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:43.604938 containerd[1482]: time="2025-09-13T00:03:43.603436503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:03:43.611616 containerd[1482]: time="2025-09-13T00:03:43.611567994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 431.785854ms" Sep 13 00:03:43.611616 containerd[1482]: time="2025-09-13T00:03:43.611609955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:03:43.614513 containerd[1482]: time="2025-09-13T00:03:43.614466574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:03:43.615549 containerd[1482]: time="2025-09-13T00:03:43.615505036Z" level=info msg="CreateContainer within sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:03:43.636355 containerd[1482]: time="2025-09-13T00:03:43.636299432Z" level=info msg="CreateContainer within sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\"" Sep 13 00:03:43.639024 containerd[1482]: time="2025-09-13T00:03:43.637118970Z" level=info msg="StartContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\"" Sep 13 00:03:43.680221 kubelet[2551]: I0913 00:03:43.680183 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sgc\" (UniqueName: \"kubernetes.io/projected/a07e934a-2922-47d0-86d3-902eb5caaffa-kube-api-access-56sgc\") pod \"calico-apiserver-5889f7d879-v4vv8\" (UID: \"a07e934a-2922-47d0-86d3-902eb5caaffa\") " pod="calico-apiserver/calico-apiserver-5889f7d879-v4vv8" Sep 13 00:03:43.680445 kubelet[2551]: I0913 00:03:43.680428 2551 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a07e934a-2922-47d0-86d3-902eb5caaffa-calico-apiserver-certs\") pod \"calico-apiserver-5889f7d879-v4vv8\" (UID: \"a07e934a-2922-47d0-86d3-902eb5caaffa\") " pod="calico-apiserver/calico-apiserver-5889f7d879-v4vv8" Sep 13 00:03:43.683980 systemd[1]: Started cri-containerd-c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e.scope - libcontainer container c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e. 
Sep 13 00:03:43.743102 containerd[1482]: time="2025-09-13T00:03:43.742764145Z" level=info msg="StartContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" returns successfully" Sep 13 00:03:43.894512 containerd[1482]: time="2025-09-13T00:03:43.894461406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-v4vv8,Uid:a07e934a-2922-47d0-86d3-902eb5caaffa,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:03:44.119101 systemd-networkd[1354]: cali0841121eced: Link UP Sep 13 00:03:44.119299 systemd-networkd[1354]: cali0841121eced: Gained carrier Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:43.970 [INFO][5329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0 calico-apiserver-5889f7d879- calico-apiserver a07e934a-2922-47d0-86d3-902eb5caaffa 1099 0 2025-09-13 00:03:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5889f7d879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-d78c7abf5e calico-apiserver-5889f7d879-v4vv8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0841121eced [] [] }} ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:43.971 [INFO][5329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.025 [INFO][5342] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" HandleID="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.025 [INFO][5342] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" HandleID="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003302f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-d78c7abf5e", "pod":"calico-apiserver-5889f7d879-v4vv8", "timestamp":"2025-09-13 00:03:44.025277022 +0000 UTC"}, Hostname:"ci-4081-3-5-n-d78c7abf5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.025 [INFO][5342] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.025 [INFO][5342] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.025 [INFO][5342] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-d78c7abf5e' Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.037 [INFO][5342] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.043 [INFO][5342] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.049 [INFO][5342] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.052 [INFO][5342] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.059 [INFO][5342] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.059 [INFO][5342] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.066 [INFO][5342] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.085 [INFO][5342] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.111 [INFO][5342] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.28.138/26] block=192.168.28.128/26 handle="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.111 [INFO][5342] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.138/26] handle="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" host="ci-4081-3-5-n-d78c7abf5e" Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.111 [INFO][5342] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:03:44.147053 containerd[1482]: 2025-09-13 00:03:44.112 [INFO][5342] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.28.138/26] IPv6=[] ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" HandleID="k8s-pod-network.e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 00:03:44.114 [INFO][5329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"a07e934a-2922-47d0-86d3-902eb5caaffa", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"", Pod:"calico-apiserver-5889f7d879-v4vv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0841121eced", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 00:03:44.114 [INFO][5329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.138/32] ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 00:03:44.114 [INFO][5329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0841121eced ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 00:03:44.116 [INFO][5329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 
00:03:44.119 [INFO][5329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"a07e934a-2922-47d0-86d3-902eb5caaffa", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d", Pod:"calico-apiserver-5889f7d879-v4vv8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0841121eced", MAC:"42:74:fc:af:b8:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:44.147616 containerd[1482]: 2025-09-13 00:03:44.143 [INFO][5329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d" Namespace="calico-apiserver" Pod="calico-apiserver-5889f7d879-v4vv8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--v4vv8-eth0" Sep 13 00:03:44.192726 containerd[1482]: time="2025-09-13T00:03:44.191283291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:03:44.192726 containerd[1482]: time="2025-09-13T00:03:44.191345852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:03:44.192726 containerd[1482]: time="2025-09-13T00:03:44.191366572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:44.192726 containerd[1482]: time="2025-09-13T00:03:44.191457294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:03:44.229115 systemd[1]: Started cri-containerd-e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d.scope - libcontainer container e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d. 
Sep 13 00:03:44.328684 containerd[1482]: time="2025-09-13T00:03:44.328559886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5889f7d879-v4vv8,Uid:a07e934a-2922-47d0-86d3-902eb5caaffa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d\"" Sep 13 00:03:44.332011 containerd[1482]: time="2025-09-13T00:03:44.331859994Z" level=info msg="CreateContainer within sandbox \"e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:03:44.351794 containerd[1482]: time="2025-09-13T00:03:44.351208194Z" level=info msg="CreateContainer within sandbox \"e75b5fb976060edf372c031a195431de11148507d5e6cf1caa345364c5c1730d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cf581511b1a59bc74da5745b7bae78f5a337f2b44767de8155ac6848f9454365\"" Sep 13 00:03:44.353645 containerd[1482]: time="2025-09-13T00:03:44.353076672Z" level=info msg="StartContainer for \"cf581511b1a59bc74da5745b7bae78f5a337f2b44767de8155ac6848f9454365\"" Sep 13 00:03:44.395837 systemd[1]: Started cri-containerd-cf581511b1a59bc74da5745b7bae78f5a337f2b44767de8155ac6848f9454365.scope - libcontainer container cf581511b1a59bc74da5745b7bae78f5a337f2b44767de8155ac6848f9454365. Sep 13 00:03:44.475182 containerd[1482]: time="2025-09-13T00:03:44.474647343Z" level=info msg="StartContainer for \"cf581511b1a59bc74da5745b7bae78f5a337f2b44767de8155ac6848f9454365\" returns successfully" Sep 13 00:03:44.541076 containerd[1482]: time="2025-09-13T00:03:44.540877671Z" level=info msg="StopContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" with timeout 30 (s)" Sep 13 00:03:44.543090 containerd[1482]: time="2025-09-13T00:03:44.541592646Z" level=info msg="Stop container \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" with signal terminated" Sep 13 00:03:44.572110 kubelet[2551]: I0913 00:03:44.572018 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f9bf767ff-9zn6h" podStartSLOduration=31.145733626 podStartE2EDuration="41.571998554s" podCreationTimestamp="2025-09-13 00:03:03 +0000 UTC" firstStartedPulling="2025-09-13 00:03:33.187350069 +0000 UTC m=+47.193840488" lastFinishedPulling="2025-09-13 00:03:43.613614997 +0000 UTC m=+57.620105416" observedRunningTime="2025-09-13 00:03:44.56887413 +0000 UTC m=+58.575364549" watchObservedRunningTime="2025-09-13 00:03:44.571998554 +0000 UTC m=+58.578488933" Sep 13 00:03:44.609803 systemd[1]: cri-containerd-c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e.scope: Deactivated successfully. 
Sep 13 00:03:44.755918 containerd[1482]: time="2025-09-13T00:03:44.755744949Z" level=info msg="shim disconnected" id=c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e namespace=k8s.io Sep 13 00:03:44.755918 containerd[1482]: time="2025-09-13T00:03:44.755798991Z" level=warning msg="cleaning up after shim disconnected" id=c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e namespace=k8s.io Sep 13 00:03:44.755918 containerd[1482]: time="2025-09-13T00:03:44.755809591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:03:44.795386 containerd[1482]: time="2025-09-13T00:03:44.795323767Z" level=info msg="StopContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" returns successfully" Sep 13 00:03:44.797103 containerd[1482]: time="2025-09-13T00:03:44.797080363Z" level=info msg="StopPodSandbox for \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\"" Sep 13 00:03:44.797331 containerd[1482]: time="2025-09-13T00:03:44.797218806Z" level=info msg="Container to stop \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:03:44.861553 systemd[1]: cri-containerd-3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8.scope: Deactivated successfully. Sep 13 00:03:44.880674 kubelet[2551]: I0913 00:03:44.880278 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5889f7d879-v4vv8" podStartSLOduration=1.880256681 podStartE2EDuration="1.880256681s" podCreationTimestamp="2025-09-13 00:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:03:44.588096927 +0000 UTC m=+58.594587346" watchObservedRunningTime="2025-09-13 00:03:44.880256681 +0000 UTC m=+58.886747100" Sep 13 00:03:44.898417 containerd[1482]: time="2025-09-13T00:03:44.898181652Z" level=info msg="shim disconnected" id=3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8 namespace=k8s.io Sep 13 00:03:44.898417 containerd[1482]: time="2025-09-13T00:03:44.898248773Z" level=warning msg="cleaning up after shim disconnected" id=3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8 namespace=k8s.io Sep 13 00:03:44.898417 containerd[1482]: time="2025-09-13T00:03:44.898257733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:03:45.068759 systemd-networkd[1354]: cali4595eb21a4b: Link DOWN Sep 13 00:03:45.069152 systemd-networkd[1354]: cali4595eb21a4b: Lost carrier Sep 13 00:03:45.184020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e-rootfs.mount: Deactivated successfully. Sep 13 00:03:45.184133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8-rootfs.mount: Deactivated successfully. Sep 13 00:03:45.184187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8-shm.mount: Deactivated successfully. 
Sep 13 00:03:45.231340 containerd[1482]: time="2025-09-13T00:03:45.230483647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.232531 containerd[1482]: time="2025-09-13T00:03:45.232496448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:03:45.234816 containerd[1482]: time="2025-09-13T00:03:45.234775495Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.242134 containerd[1482]: time="2025-09-13T00:03:45.242092404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:45.242854 containerd[1482]: time="2025-09-13T00:03:45.242817938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.628307803s" Sep 13 00:03:45.242854 containerd[1482]: time="2025-09-13T00:03:45.242852379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:03:45.248354 containerd[1482]: time="2025-09-13T00:03:45.248319850Z" level=info msg="CreateContainer within sandbox \"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:03:45.301546 containerd[1482]: time="2025-09-13T00:03:45.301494053Z" level=info msg="CreateContainer within sandbox \"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d6c73fa7054253e831c7da646cc3b47f9ffd82c06751bfb71c947444a3ab96c\"" Sep 13 00:03:45.303867 containerd[1482]: time="2025-09-13T00:03:45.303768579Z" level=info msg="StartContainer for \"0d6c73fa7054253e831c7da646cc3b47f9ffd82c06751bfb71c947444a3ab96c\"" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.063 [INFO][5531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.063 [INFO][5531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" iface="eth0" netns="/var/run/netns/cni-47bd7705-a9b7-8a41-4c45-fe5e03d28806" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.064 [INFO][5531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" iface="eth0" netns="/var/run/netns/cni-47bd7705-a9b7-8a41-4c45-fe5e03d28806" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.081 [INFO][5531] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" after=17.19811ms iface="eth0" netns="/var/run/netns/cni-47bd7705-a9b7-8a41-4c45-fe5e03d28806" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.081 [INFO][5531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.081 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.157 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.159 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.159 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.291 [INFO][5547] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.291 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.297 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:45.307474 containerd[1482]: 2025-09-13 00:03:45.300 [INFO][5531] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:45.307474 containerd[1482]: time="2025-09-13T00:03:45.307288251Z" level=info msg="TearDown network for sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" successfully" Sep 13 00:03:45.307474 containerd[1482]: time="2025-09-13T00:03:45.307321452Z" level=info msg="StopPodSandbox for \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" returns successfully" Sep 13 00:03:45.312876 systemd[1]: run-netns-cni\x2d47bd7705\x2da9b7\x2d8a41\x2d4c45\x2dfe5e03d28806.mount: Deactivated successfully. Sep 13 00:03:45.318626 containerd[1482]: time="2025-09-13T00:03:45.318085071Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:45.393037 systemd[1]: Started cri-containerd-0d6c73fa7054253e831c7da646cc3b47f9ffd82c06751bfb71c947444a3ab96c.scope - libcontainer container 0d6c73fa7054253e831c7da646cc3b47f9ffd82c06751bfb71c947444a3ab96c. Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.420 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"760badfc-d54f-4563-b3b8-27b25aca93e2", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8", Pod:"calico-apiserver-f9bf767ff-9zn6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595eb21a4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.420 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.420 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.421 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.421 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.458 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.459 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.459 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.473 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.473 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.475 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:45.483469 containerd[1482]: 2025-09-13 00:03:45.480 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:45.484337 containerd[1482]: time="2025-09-13T00:03:45.483497678Z" level=info msg="TearDown network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" successfully" Sep 13 00:03:45.484337 containerd[1482]: time="2025-09-13T00:03:45.483521279Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" returns successfully" Sep 13 00:03:45.500114 containerd[1482]: time="2025-09-13T00:03:45.500064016Z" level=info msg="StartContainer for \"0d6c73fa7054253e831c7da646cc3b47f9ffd82c06751bfb71c947444a3ab96c\" returns successfully" Sep 13 00:03:45.501192 containerd[1482]: time="2025-09-13T00:03:45.501154358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:03:45.557178 kubelet[2551]: I0913 00:03:45.556443 2551 scope.go:117] "RemoveContainer" containerID="c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e" Sep 13 00:03:45.557178 kubelet[2551]: I0913 00:03:45.556580 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:03:45.559506 containerd[1482]: time="2025-09-13T00:03:45.558533726Z" level=info msg="RemoveContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\"" Sep 13 00:03:45.565298 containerd[1482]: time="2025-09-13T00:03:45.565263423Z" level=info msg="RemoveContainer for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" returns successfully" Sep 13 00:03:45.565739 kubelet[2551]: I0913 00:03:45.565708 2551 scope.go:117] "RemoveContainer" containerID="c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e" Sep 13 00:03:45.566010 containerd[1482]: time="2025-09-13T00:03:45.565974117Z" level=error msg="ContainerStatus for \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\": not found" Sep 13 00:03:45.569474 kubelet[2551]: E0913 00:03:45.569427 2551 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\": not found" containerID="c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e" Sep 13 00:03:45.571223 kubelet[2551]: I0913 00:03:45.571153 2551 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e"} err="failed to get container status \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c499059ef572836647d4c01322c5c651a88cff4205ac97c9041694b54eb28f8e\": not found" Sep 13 00:03:45.600376 kubelet[2551]: I0913 00:03:45.600336 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z82df\" (UniqueName: \"kubernetes.io/projected/760badfc-d54f-4563-b3b8-27b25aca93e2-kube-api-access-z82df\") pod \"760badfc-d54f-4563-b3b8-27b25aca93e2\" (UID: \"760badfc-d54f-4563-b3b8-27b25aca93e2\") " Sep 13 00:03:45.601142 kubelet[2551]: I0913 00:03:45.600390 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/760badfc-d54f-4563-b3b8-27b25aca93e2-calico-apiserver-certs\") pod \"760badfc-d54f-4563-b3b8-27b25aca93e2\" (UID: \"760badfc-d54f-4563-b3b8-27b25aca93e2\") " Sep 13 00:03:45.606157 kubelet[2551]: I0913 00:03:45.606078 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760badfc-d54f-4563-b3b8-27b25aca93e2-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "760badfc-d54f-4563-b3b8-27b25aca93e2" (UID: "760badfc-d54f-4563-b3b8-27b25aca93e2"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:03:45.606456 kubelet[2551]: I0913 00:03:45.606272 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760badfc-d54f-4563-b3b8-27b25aca93e2-kube-api-access-z82df" (OuterVolumeSpecName: "kube-api-access-z82df") pod "760badfc-d54f-4563-b3b8-27b25aca93e2" (UID: "760badfc-d54f-4563-b3b8-27b25aca93e2"). InnerVolumeSpecName "kube-api-access-z82df". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:03:45.701686 kubelet[2551]: I0913 00:03:45.701539 2551 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z82df\" (UniqueName: \"kubernetes.io/projected/760badfc-d54f-4563-b3b8-27b25aca93e2-kube-api-access-z82df\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:03:45.701686 kubelet[2551]: I0913 00:03:45.701585 2551 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/760badfc-d54f-4563-b3b8-27b25aca93e2-calico-apiserver-certs\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:03:45.775056 systemd-networkd[1354]: cali0841121eced: Gained IPv6LL Sep 13 00:03:45.861791 systemd[1]: Removed slice kubepods-besteffort-pod760badfc_d54f_4563_b3b8_27b25aca93e2.slice - libcontainer container kubepods-besteffort-pod760badfc_d54f_4563_b3b8_27b25aca93e2.slice. Sep 13 00:03:46.164607 kubelet[2551]: I0913 00:03:46.164112 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760badfc-d54f-4563-b3b8-27b25aca93e2" path="/var/lib/kubelet/pods/760badfc-d54f-4563-b3b8-27b25aca93e2/volumes" Sep 13 00:03:46.173877 containerd[1482]: time="2025-09-13T00:03:46.173839485Z" level=info msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" Sep 13 00:03:46.186404 systemd[1]: var-lib-kubelet-pods-760badfc\x2dd54f\x2d4563\x2db3b8\x2d27b25aca93e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz82df.mount: Deactivated successfully. 
Sep 13 00:03:46.186513 systemd[1]: var-lib-kubelet-pods-760badfc\x2dd54f\x2d4563\x2db3b8\x2d27b25aca93e2-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.221 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2", Pod:"coredns-7c65d6cfc9-6fdv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e99bff266b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.222 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.222 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" iface="eth0" netns="" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.222 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.222 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.254 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.254 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.254 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.271 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.271 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.273 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.279061 containerd[1482]: 2025-09-13 00:03:46.276 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.280003 containerd[1482]: time="2025-09-13T00:03:46.279137399Z" level=info msg="TearDown network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" successfully" Sep 13 00:03:46.280003 containerd[1482]: time="2025-09-13T00:03:46.279169280Z" level=info msg="StopPodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" returns successfully" Sep 13 00:03:46.280388 containerd[1482]: time="2025-09-13T00:03:46.280359544Z" level=info msg="RemovePodSandbox for \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" Sep 13 00:03:46.283017 containerd[1482]: time="2025-09-13T00:03:46.282973396Z" level=info msg="Forcibly stopping sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\"" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.347 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ff0975d8-c7ba-42bd-bf6d-aeeda659ab5d", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5b1178b6bdb344ca450cc4ce2aaff468b73e2bd2223cae9e2786e0dd3fb5f8d2", Pod:"coredns-7c65d6cfc9-6fdv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e99bff266b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.348 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.348 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" iface="eth0" netns="" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.348 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.348 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.379 [INFO][5658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.379 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.379 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.390 [WARNING][5658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.390 [INFO][5658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" HandleID="k8s-pod-network.ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--6fdv4-eth0" Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.392 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.394757 containerd[1482]: 2025-09-13 00:03:46.393 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf" Sep 13 00:03:46.395325 containerd[1482]: time="2025-09-13T00:03:46.394794562Z" level=info msg="TearDown network for sandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" successfully" Sep 13 00:03:46.400046 containerd[1482]: time="2025-09-13T00:03:46.399976506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:46.400139 containerd[1482]: time="2025-09-13T00:03:46.400085028Z" level=info msg="RemovePodSandbox \"ff853313c374e464030d1ffd270db9bffcc67bd985cfc892fa6f2e39e4d6f3bf\" returns successfully" Sep 13 00:03:46.400712 containerd[1482]: time="2025-09-13T00:03:46.400645919Z" level=info msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.484 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62008ae-00ba-4b2b-b2d4-a36f45e38687", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771", Pod:"calico-apiserver-f9bf767ff-65nq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69eec54acef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.485 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.485 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" iface="eth0" netns="" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.485 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.485 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.525 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.525 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.525 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.534 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.534 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.536 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.540458 containerd[1482]: 2025-09-13 00:03:46.538 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.541024 containerd[1482]: time="2025-09-13T00:03:46.540989858Z" level=info msg="TearDown network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" successfully" Sep 13 00:03:46.541062 containerd[1482]: time="2025-09-13T00:03:46.541033939Z" level=info msg="StopPodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" returns successfully" Sep 13 00:03:46.541866 containerd[1482]: time="2025-09-13T00:03:46.541819074Z" level=info msg="RemovePodSandbox for \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" Sep 13 00:03:46.541866 containerd[1482]: time="2025-09-13T00:03:46.541857635Z" level=info msg="Forcibly stopping sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\"" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.588 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0", GenerateName:"calico-apiserver-f9bf767ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62008ae-00ba-4b2b-b2d4-a36f45e38687", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f9bf767ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771", Pod:"calico-apiserver-f9bf767ff-65nq2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali69eec54acef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.588 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.588 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" iface="eth0" netns="" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.588 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.588 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.613 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.613 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.613 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.623 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.623 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" HandleID="k8s-pod-network.d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.625 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.630047 containerd[1482]: 2025-09-13 00:03:46.627 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508" Sep 13 00:03:46.630544 containerd[1482]: time="2025-09-13T00:03:46.630108127Z" level=info msg="TearDown network for sandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" successfully" Sep 13 00:03:46.643026 containerd[1482]: time="2025-09-13T00:03:46.642958745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:46.643172 containerd[1482]: time="2025-09-13T00:03:46.643077188Z" level=info msg="RemovePodSandbox \"d3ea89deec1160a3d03191b21afc24f3230eb0df3ed361388981263555c18508\" returns successfully" Sep 13 00:03:46.643691 containerd[1482]: time="2025-09-13T00:03:46.643653239Z" level=info msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.706 [WARNING][5723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"59113340-495e-48cd-a556-b5807ae1b886", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c", Pod:"calico-apiserver-5889f7d879-488cx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a627638229", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.706 [INFO][5723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.707 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" iface="eth0" netns="" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.707 [INFO][5723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.707 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.741 [INFO][5730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.742 [INFO][5730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.742 [INFO][5730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.753 [WARNING][5730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.753 [INFO][5730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.755 [INFO][5730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.764476 containerd[1482]: 2025-09-13 00:03:46.759 [INFO][5723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.764476 containerd[1482]: time="2025-09-13T00:03:46.764373304Z" level=info msg="TearDown network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" successfully" Sep 13 00:03:46.764476 containerd[1482]: time="2025-09-13T00:03:46.764397464Z" level=info msg="StopPodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" returns successfully" Sep 13 00:03:46.765377 containerd[1482]: time="2025-09-13T00:03:46.764997516Z" level=info msg="RemovePodSandbox for \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" Sep 13 00:03:46.765377 containerd[1482]: time="2025-09-13T00:03:46.765024757Z" level=info msg="Forcibly stopping sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\"" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.844 [WARNING][5744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0", GenerateName:"calico-apiserver-5889f7d879-", Namespace:"calico-apiserver", SelfLink:"", UID:"59113340-495e-48cd-a556-b5807ae1b886", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5889f7d879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"55bf50e02fc99f9473f9ac1736424e1804ba5c356d3e31443b5b90f00e43f29c", Pod:"calico-apiserver-5889f7d879-488cx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a627638229", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.844 [INFO][5744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.844 [INFO][5744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" iface="eth0" netns="" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.844 [INFO][5744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.844 [INFO][5744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.874 [INFO][5755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.874 [INFO][5755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.874 [INFO][5755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.887 [WARNING][5755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.887 [INFO][5755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" HandleID="k8s-pod-network.f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--5889f7d879--488cx-eth0" Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.890 [INFO][5755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:46.895107 containerd[1482]: 2025-09-13 00:03:46.892 [INFO][5744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd" Sep 13 00:03:46.895107 containerd[1482]: time="2025-09-13T00:03:46.894671560Z" level=info msg="TearDown network for sandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" successfully" Sep 13 00:03:46.902758 containerd[1482]: time="2025-09-13T00:03:46.902341434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:46.902758 containerd[1482]: time="2025-09-13T00:03:46.902417156Z" level=info msg="RemovePodSandbox \"f0d78aa9f43fa56f5d4ad6aecd9072fa969aaeb7d4bf7ba8a5481f43d72d6dcd\" returns successfully" Sep 13 00:03:46.903199 containerd[1482]: time="2025-09-13T00:03:46.903106529Z" level=info msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.017 [WARNING][5770] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.017 [INFO][5770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.017 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" iface="eth0" netns="" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.018 [INFO][5770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.018 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.054 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.055 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.055 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.067 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.067 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.069 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.083363 containerd[1482]: 2025-09-13 00:03:47.073 [INFO][5770] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.083363 containerd[1482]: time="2025-09-13T00:03:47.083238485Z" level=info msg="TearDown network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" successfully" Sep 13 00:03:47.083363 containerd[1482]: time="2025-09-13T00:03:47.083265166Z" level=info msg="StopPodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" returns successfully" Sep 13 00:03:47.084083 containerd[1482]: time="2025-09-13T00:03:47.083819417Z" level=info msg="RemovePodSandbox for \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" Sep 13 00:03:47.084083 containerd[1482]: time="2025-09-13T00:03:47.083860658Z" level=info msg="Forcibly stopping sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\"" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.160 [WARNING][5790] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.160 [INFO][5790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.160 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" iface="eth0" netns="" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.160 [INFO][5790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.160 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.222 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.222 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.222 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.237 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.237 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" HandleID="k8s-pod-network.ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-whisker--57fbb95f8d--fhz2z-eth0" Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.242 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.253545 containerd[1482]: 2025-09-13 00:03:47.246 [INFO][5790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1" Sep 13 00:03:47.253545 containerd[1482]: time="2025-09-13T00:03:47.253397498Z" level=info msg="TearDown network for sandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" successfully" Sep 13 00:03:47.262586 containerd[1482]: time="2025-09-13T00:03:47.262530719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:47.263052 containerd[1482]: time="2025-09-13T00:03:47.262610721Z" level=info msg="RemovePodSandbox \"ea790e6364d69d4a023ac3823ec36401b3a219ca445fe3c34a1d7f7ea6c7c9d1\" returns successfully" Sep 13 00:03:47.264758 containerd[1482]: time="2025-09-13T00:03:47.264371116Z" level=info msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" Sep 13 00:03:47.356334 containerd[1482]: time="2025-09-13T00:03:47.356273337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.358155 containerd[1482]: time="2025-09-13T00:03:47.358107133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:03:47.359009 containerd[1482]: time="2025-09-13T00:03:47.358887109Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.363112 containerd[1482]: time="2025-09-13T00:03:47.363058952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:03:47.364376 containerd[1482]: time="2025-09-13T00:03:47.364244895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.863051697s" Sep 13 00:03:47.364376 containerd[1482]: time="2025-09-13T00:03:47.364294216Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:03:47.367241 containerd[1482]: time="2025-09-13T00:03:47.367101232Z" level=info msg="CreateContainer within sandbox \"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:03:47.396390 containerd[1482]: time="2025-09-13T00:03:47.396328851Z" level=info msg="CreateContainer within sandbox \"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9\"" Sep 13 00:03:47.397784 containerd[1482]: time="2025-09-13T00:03:47.397671918Z" level=info msg="StartContainer for \"4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9\"" Sep 13 00:03:47.461926 systemd[1]: run-containerd-runc-k8s.io-4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9-runc.e2G2uP.mount: Deactivated successfully. Sep 13 00:03:47.473145 systemd[1]: Started cri-containerd-4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9.scope - libcontainer container 4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9. Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.371 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a8ab913-32e4-496a-82e5-7f7a7a768ff5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f", Pod:"coredns-7c65d6cfc9-s5g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c5d8498784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 
00:03:47.371 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.372 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" iface="eth0" netns="" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.372 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.372 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.434 [INFO][5818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.439 [INFO][5818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.439 [INFO][5818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.463 [WARNING][5818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.465 [INFO][5818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.468 [INFO][5818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.476294 containerd[1482]: 2025-09-13 00:03:47.471 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.477572 containerd[1482]: time="2025-09-13T00:03:47.476448959Z" level=info msg="TearDown network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" successfully" Sep 13 00:03:47.477572 containerd[1482]: time="2025-09-13T00:03:47.476743445Z" level=info msg="StopPodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" returns successfully" Sep 13 00:03:47.477880 containerd[1482]: time="2025-09-13T00:03:47.477774025Z" level=info msg="RemovePodSandbox for \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" Sep 13 00:03:47.477880 containerd[1482]: time="2025-09-13T00:03:47.477808746Z" level=info msg="Forcibly stopping sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\"" Sep 13 00:03:47.586329 containerd[1482]: time="2025-09-13T00:03:47.586290016Z" level=info msg="StartContainer for \"4e85fdd8d174df03c9f258c9f8d565247dfc66c8eff23cf3102de7359cea2cf9\" returns successfully" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.529 [WARNING][5851] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8a8ab913-32e4-496a-82e5-7f7a7a768ff5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"2d40e26e5efcb1089af2d625d026b0e72a4a391129b818f819187fc8a2adb51f", Pod:"coredns-7c65d6cfc9-s5g7w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c5d8498784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.529 [INFO][5851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.530 [INFO][5851] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" iface="eth0" netns="" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.530 [INFO][5851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.530 [INFO][5851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.601 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.603 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.603 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.623 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.623 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" HandleID="k8s-pod-network.b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-coredns--7c65d6cfc9--s5g7w-eth0" Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.627 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.631301 containerd[1482]: 2025-09-13 00:03:47.628 [INFO][5851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b" Sep 13 00:03:47.632108 containerd[1482]: time="2025-09-13T00:03:47.631814959Z" level=info msg="TearDown network for sandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" successfully" Sep 13 00:03:47.637019 containerd[1482]: time="2025-09-13T00:03:47.636955181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
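For scale, the node-driver-registrar pull above reports 13,761,208 bytes read over 1.863051697s, which works out to roughly 7.4 MB/s of effective registry throughput (compressed bytes; the image's reported size, 15,130,401 bytes, is larger). The arithmetic:

```go
package main

import "fmt"

func main() {
	const bytesRead = 13761208.0 // "bytes read" reported by containerd
	const seconds = 1.863051697  // pull duration from the log
	fmt.Printf("effective pull rate: %.2f MB/s\n", bytesRead/seconds/1e6) // 7.39 MB/s
}
```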
Sep 13 00:03:47.637306 containerd[1482]: time="2025-09-13T00:03:47.637194345Z" level=info msg="RemovePodSandbox \"b6df49de5c907f45edac3f0276179dddb642a9812e5a96d0a2fa48ddb0e1610b\" returns successfully" Sep 13 00:03:47.638921 containerd[1482]: time="2025-09-13T00:03:47.637926160Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.740 [WARNING][5887] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.740 [INFO][5887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.740 [INFO][5887] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.740 [INFO][5887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.740 [INFO][5887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.778 [INFO][5933] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.778 [INFO][5933] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.779 [INFO][5933] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.788 [WARNING][5933] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.788 [INFO][5933] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.790 [INFO][5933] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.794925 containerd[1482]: 2025-09-13 00:03:47.792 [INFO][5887] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.794925 containerd[1482]: time="2025-09-13T00:03:47.794820190Z" level=info msg="TearDown network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" successfully" Sep 13 00:03:47.794925 containerd[1482]: time="2025-09-13T00:03:47.794844870Z" level=info msg="StopPodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" returns successfully" Sep 13 00:03:47.796689 containerd[1482]: time="2025-09-13T00:03:47.795880731Z" level=info msg="RemovePodSandbox for \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:47.796689 containerd[1482]: time="2025-09-13T00:03:47.796201937Z" level=info msg="Forcibly stopping sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\"" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.870 [WARNING][5947] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.870 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.870 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" iface="eth0" netns="" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.870 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.870 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.898 [INFO][5956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.899 [INFO][5956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.899 [INFO][5956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.908 [WARNING][5956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.908 [INFO][5956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" HandleID="k8s-pod-network.ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.912 [INFO][5956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:47.918560 containerd[1482]: 2025-09-13 00:03:47.916 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8" Sep 13 00:03:47.918560 containerd[1482]: time="2025-09-13T00:03:47.918536242Z" level=info msg="TearDown network for sandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" successfully" Sep 13 00:03:47.926829 containerd[1482]: time="2025-09-13T00:03:47.926781605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:47.926960 containerd[1482]: time="2025-09-13T00:03:47.926861407Z" level=info msg="RemovePodSandbox \"ba0dfd54fa4567f3e371fb4b053d1d0a64d822c2dd978f5033c0640222c5e8f8\" returns successfully" Sep 13 00:03:47.927738 containerd[1482]: time="2025-09-13T00:03:47.927708904Z" level=info msg="StopPodSandbox for \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\"" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:47.988 [WARNING][5970] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:47.989 [INFO][5970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:47.989 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" iface="eth0" netns="" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:47.989 [INFO][5970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:47.989 [INFO][5970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.011 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.011 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.011 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.025 [WARNING][5977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.025 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.027 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.033743 containerd[1482]: 2025-09-13 00:03:48.031 [INFO][5970] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.035852 containerd[1482]: time="2025-09-13T00:03:48.033782718Z" level=info msg="TearDown network for sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" successfully" Sep 13 00:03:48.035852 containerd[1482]: time="2025-09-13T00:03:48.033808719Z" level=info msg="StopPodSandbox for \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" returns successfully" Sep 13 00:03:48.035852 containerd[1482]: time="2025-09-13T00:03:48.034579254Z" level=info msg="RemovePodSandbox for \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\"" Sep 13 00:03:48.035852 containerd[1482]: time="2025-09-13T00:03:48.034611694Z" level=info msg="Forcibly stopping sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\"" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.096 [WARNING][5993] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.096 [INFO][5993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.096 [INFO][5993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" iface="eth0" netns="" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.096 [INFO][5993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.096 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.118 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.118 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.118 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.133 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.133 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" HandleID="k8s-pod-network.3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--9zn6h-eth0" Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.135 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.139003 containerd[1482]: 2025-09-13 00:03:48.137 [INFO][5993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8" Sep 13 00:03:48.139365 containerd[1482]: time="2025-09-13T00:03:48.139040699Z" level=info msg="TearDown network for sandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" successfully" Sep 13 00:03:48.157127 containerd[1482]: time="2025-09-13T00:03:48.144938174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:48.157310 containerd[1482]: time="2025-09-13T00:03:48.157171134Z" level=info msg="RemovePodSandbox \"3c5ce807b81aaa9dc36dbf2feece223a2b56bcd4f4ab4e93ee0d8c56059d61c8\" returns successfully" Sep 13 00:03:48.157648 containerd[1482]: time="2025-09-13T00:03:48.157619342Z" level=info msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.207 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96d6c8b4-b0e0-4570-9741-27188bdbb60e", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976", Pod:"csi-node-driver-54r9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4403672b85d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.208 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.208 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" iface="eth0" netns="" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.208 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.208 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.232 [INFO][6021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.232 [INFO][6021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.232 [INFO][6021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.245 [WARNING][6021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.245 [INFO][6021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.247 [INFO][6021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.252529 containerd[1482]: 2025-09-13 00:03:48.250 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.252529 containerd[1482]: time="2025-09-13T00:03:48.252492960Z" level=info msg="TearDown network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" successfully" Sep 13 00:03:48.252529 containerd[1482]: time="2025-09-13T00:03:48.252518120Z" level=info msg="StopPodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" returns successfully" Sep 13 00:03:48.255731 containerd[1482]: time="2025-09-13T00:03:48.254425878Z" level=info msg="RemovePodSandbox for \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" Sep 13 00:03:48.255731 containerd[1482]: time="2025-09-13T00:03:48.254460358Z" level=info msg="Forcibly stopping sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\"" Sep 13 00:03:48.373629 kubelet[2551]: I0913 00:03:48.373584 2551 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:03:48.382421 kubelet[2551]: I0913 00:03:48.382370 2551 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.333 [WARNING][6035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96d6c8b4-b0e0-4570-9741-27188bdbb60e", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"4b9f0ed7b6421abbb6b7406759a3798d7c31ccb34dd344e4fe5bbb1688c95976", Pod:"csi-node-driver-54r9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4403672b85d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.333 [INFO][6035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.334 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" iface="eth0" netns="" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.334 [INFO][6035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.334 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.356 [INFO][6042] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.357 [INFO][6042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.357 [INFO][6042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.426 [WARNING][6042] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.427 [INFO][6042] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" HandleID="k8s-pod-network.52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-csi--node--driver--54r9g-eth0" Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.431 [INFO][6042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.437289 containerd[1482]: 2025-09-13 00:03:48.435 [INFO][6035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3" Sep 13 00:03:48.438533 containerd[1482]: time="2025-09-13T00:03:48.437410300Z" level=info msg="TearDown network for sandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" successfully" Sep 13 00:03:48.444662 containerd[1482]: time="2025-09-13T00:03:48.444151192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:48.444662 containerd[1482]: time="2025-09-13T00:03:48.444611841Z" level=info msg="RemovePodSandbox \"52a91881d4c39b6f2d1a62de1beb7cc61e04795fd934264e2552164a4fbe0ba3\" returns successfully" Sep 13 00:03:48.446546 containerd[1482]: time="2025-09-13T00:03:48.446267913Z" level=info msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.513 [WARNING][6056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0", GenerateName:"calico-kube-controllers-5559ff8dc6-", Namespace:"calico-system", SelfLink:"", UID:"5bc73e8e-3e70-4fad-b2fe-fc395c348d63", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559ff8dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876", Pod:"calico-kube-controllers-5559ff8dc6-m5kcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bd433cb9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.513 [INFO][6056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.513 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" iface="eth0" netns="" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.513 [INFO][6056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.513 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.549 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.550 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.551 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.560 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.561 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.563 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.567017 containerd[1482]: 2025-09-13 00:03:48.565 [INFO][6056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.568060 containerd[1482]: time="2025-09-13T00:03:48.568026097Z" level=info msg="TearDown network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" successfully" Sep 13 00:03:48.568170 containerd[1482]: time="2025-09-13T00:03:48.568155380Z" level=info msg="StopPodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" returns successfully" Sep 13 00:03:48.568942 containerd[1482]: time="2025-09-13T00:03:48.568845913Z" level=info msg="RemovePodSandbox for \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" Sep 13 00:03:48.569221 containerd[1482]: time="2025-09-13T00:03:48.569179240Z" level=info msg="Forcibly stopping sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\"" Sep 13 00:03:48.653416 kubelet[2551]: I0913 00:03:48.653348 2551 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-54r9g" podStartSLOduration=25.888660962 podStartE2EDuration="39.653329447s" podCreationTimestamp="2025-09-13 00:03:09 +0000 UTC" firstStartedPulling="2025-09-13 00:03:33.600786714 +0000 UTC m=+47.607277173" lastFinishedPulling="2025-09-13 00:03:47.365455239 +0000 UTC m=+61.371945658" observedRunningTime="2025-09-13 00:03:48.646089065 +0000 UTC m=+62.652579484" watchObservedRunningTime="2025-09-13 00:03:48.653329447 +0000 UTC m=+62.659819826" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.662 [WARNING][6078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0", GenerateName:"calico-kube-controllers-5559ff8dc6-", Namespace:"calico-system", SelfLink:"", UID:"5bc73e8e-3e70-4fad-b2fe-fc395c348d63", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5559ff8dc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"5d76f08e972b7e644d3f7f62242f7a12ac76d487ac9be3ffcfb3e922c600b876", Pod:"calico-kube-controllers-5559ff8dc6-m5kcg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali46bd433cb9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.663 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.663 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" iface="eth0" netns="" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.663 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.663 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.693 [INFO][6085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.693 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.693 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.705 [WARNING][6085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.705 [INFO][6085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" HandleID="k8s-pod-network.4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--kube--controllers--5559ff8dc6--m5kcg-eth0" Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.709 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.713787 containerd[1482]: 2025-09-13 00:03:48.711 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042" Sep 13 00:03:48.715649 containerd[1482]: time="2025-09-13T00:03:48.714015075Z" level=info msg="TearDown network for sandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" successfully" Sep 13 00:03:48.719944 containerd[1482]: time="2025-09-13T00:03:48.719879470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:48.720136 containerd[1482]: time="2025-09-13T00:03:48.720118355Z" level=info msg="RemovePodSandbox \"4fe337311e62d4f264292d801130ef713c79aa61a1bc152f4b9596627b95e042\" returns successfully" Sep 13 00:03:48.720734 containerd[1482]: time="2025-09-13T00:03:48.720668645Z" level=info msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.773 [WARNING][6099] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9ec71789-a324-43ba-b57e-8169dfe5b109", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422", Pod:"goldmane-7988f88666-qj7nd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie094c1e89f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.773 [INFO][6099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.773 [INFO][6099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" iface="eth0" netns="" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.773 [INFO][6099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.773 [INFO][6099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.801 [INFO][6106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.802 [INFO][6106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.802 [INFO][6106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.813 [WARNING][6106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.813 [INFO][6106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.814 [INFO][6106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.819947 containerd[1482]: 2025-09-13 00:03:48.816 [INFO][6099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.819947 containerd[1482]: time="2025-09-13T00:03:48.817947150Z" level=info msg="TearDown network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" successfully" Sep 13 00:03:48.819947 containerd[1482]: time="2025-09-13T00:03:48.817973110Z" level=info msg="StopPodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" returns successfully" Sep 13 00:03:48.821448 containerd[1482]: time="2025-09-13T00:03:48.820613242Z" level=info msg="RemovePodSandbox for \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" Sep 13 00:03:48.821448 containerd[1482]: time="2025-09-13T00:03:48.820655643Z" level=info msg="Forcibly stopping sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\"" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.874 [WARNING][6120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"9ec71789-a324-43ba-b57e-8169dfe5b109", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-d78c7abf5e", ContainerID:"3f19e9d31bbb136c4b9f7229b4b0c0947f2369a5bd85682e6e657c0ca238b422", Pod:"goldmane-7988f88666-qj7nd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie094c1e89f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.875 [INFO][6120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.875 [INFO][6120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" iface="eth0" netns="" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.875 [INFO][6120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.875 [INFO][6120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.908 [INFO][6128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.909 [INFO][6128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.909 [INFO][6128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.920 [WARNING][6128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.920 [INFO][6128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" HandleID="k8s-pod-network.7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-goldmane--7988f88666--qj7nd-eth0" Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.922 [INFO][6128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:03:48.929012 containerd[1482]: 2025-09-13 00:03:48.925 [INFO][6120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99" Sep 13 00:03:48.931848 containerd[1482]: time="2025-09-13T00:03:48.928634437Z" level=info msg="TearDown network for sandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" successfully" Sep 13 00:03:48.939284 containerd[1482]: time="2025-09-13T00:03:48.938938438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:03:48.939284 containerd[1482]: time="2025-09-13T00:03:48.939113762Z" level=info msg="RemovePodSandbox \"7cfd03c9a604a0ea8dc43809a8e32d4e4424703934efff84e736ff1ed5009c99\" returns successfully" Sep 13 00:04:04.231457 kubelet[2551]: I0913 00:04:04.231207 2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:04:04.346007 containerd[1482]: time="2025-09-13T00:04:04.345883176Z" level=info msg="StopContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" with timeout 30 (s)" Sep 13 00:04:04.347138 containerd[1482]: time="2025-09-13T00:04:04.346730125Z" level=info msg="Stop container \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" with signal terminated" Sep 13 00:04:04.456130 systemd[1]: cri-containerd-8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c.scope: Deactivated successfully. Sep 13 00:04:04.456969 systemd[1]: cri-containerd-8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c.scope: Consumed 1.115s CPU time. Sep 13 00:04:04.487288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c-rootfs.mount: Deactivated successfully. 
Sep 13 00:04:04.490815 containerd[1482]: time="2025-09-13T00:04:04.490611157Z" level=info msg="shim disconnected" id=8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c namespace=k8s.io Sep 13 00:04:04.490815 containerd[1482]: time="2025-09-13T00:04:04.490667356Z" level=warning msg="cleaning up after shim disconnected" id=8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c namespace=k8s.io Sep 13 00:04:04.490815 containerd[1482]: time="2025-09-13T00:04:04.490677036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:04.509243 containerd[1482]: time="2025-09-13T00:04:04.509065950Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:04:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 00:04:04.521975 containerd[1482]: time="2025-09-13T00:04:04.521688861Z" level=info msg="StopContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" returns successfully" Sep 13 00:04:04.522601 containerd[1482]: time="2025-09-13T00:04:04.522437851Z" level=info msg="StopPodSandbox for \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\"" Sep 13 00:04:04.522601 containerd[1482]: time="2025-09-13T00:04:04.522513010Z" level=info msg="Container to stop \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:04:04.525971 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771-shm.mount: Deactivated successfully. Sep 13 00:04:04.539202 systemd[1]: cri-containerd-dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771.scope: Deactivated successfully. Sep 13 00:04:04.572333 containerd[1482]: time="2025-09-13T00:04:04.572146145Z" level=info msg="shim disconnected" id=dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771 namespace=k8s.io Sep 13 00:04:04.572333 containerd[1482]: time="2025-09-13T00:04:04.572305423Z" level=warning msg="cleaning up after shim disconnected" id=dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771 namespace=k8s.io Sep 13 00:04:04.572333 containerd[1482]: time="2025-09-13T00:04:04.572334102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:04:04.576774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771-rootfs.mount: Deactivated successfully. Sep 13 00:04:04.643054 kubelet[2551]: I0913 00:04:04.642952 2551 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:04.672679 systemd-networkd[1354]: cali69eec54acef: Link DOWN Sep 13 00:04:04.672694 systemd-networkd[1354]: cali69eec54acef: Lost carrier Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.670 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.670 [INFO][6241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" iface="eth0" netns="/var/run/netns/cni-ab48e778-5e86-5e0d-bac6-4a5b126b2eb0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.670 [INFO][6241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" iface="eth0" netns="/var/run/netns/cni-ab48e778-5e86-5e0d-bac6-4a5b126b2eb0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.681 [INFO][6241] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" after=10.628657ms iface="eth0" netns="/var/run/netns/cni-ab48e778-5e86-5e0d-bac6-4a5b126b2eb0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.681 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.681 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.713 [INFO][6248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.714 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.714 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.759 [INFO][6248] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.759 [INFO][6248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.761 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:04:04.765325 containerd[1482]: 2025-09-13 00:04:04.763 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:04.769252 containerd[1482]: time="2025-09-13T00:04:04.769026227Z" level=info msg="TearDown network for sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" successfully" Sep 13 00:04:04.769252 containerd[1482]: time="2025-09-13T00:04:04.769060147Z" level=info msg="StopPodSandbox for \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" returns successfully" Sep 13 00:04:04.771984 systemd[1]: run-netns-cni\x2dab48e778\x2d5e86\x2d5e0d\x2dbac6\x2d4a5b126b2eb0.mount: Deactivated successfully. 
Sep 13 00:04:04.947708 kubelet[2551]: I0913 00:04:04.947642 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs2tn\" (UniqueName: \"kubernetes.io/projected/c62008ae-00ba-4b2b-b2d4-a36f45e38687-kube-api-access-xs2tn\") pod \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\" (UID: \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\") " Sep 13 00:04:04.947708 kubelet[2551]: I0913 00:04:04.947713 2551 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c62008ae-00ba-4b2b-b2d4-a36f45e38687-calico-apiserver-certs\") pod \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\" (UID: \"c62008ae-00ba-4b2b-b2d4-a36f45e38687\") " Sep 13 00:04:04.954637 kubelet[2551]: I0913 00:04:04.953562 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62008ae-00ba-4b2b-b2d4-a36f45e38687-kube-api-access-xs2tn" (OuterVolumeSpecName: "kube-api-access-xs2tn") pod "c62008ae-00ba-4b2b-b2d4-a36f45e38687" (UID: "c62008ae-00ba-4b2b-b2d4-a36f45e38687"). InnerVolumeSpecName "kube-api-access-xs2tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:04:04.954752 systemd[1]: var-lib-kubelet-pods-c62008ae\x2d00ba\x2d4b2b\x2db2d4\x2da36f45e38687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxs2tn.mount: Deactivated successfully. Sep 13 00:04:04.956540 kubelet[2551]: I0913 00:04:04.956488 2551 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c62008ae-00ba-4b2b-b2d4-a36f45e38687-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c62008ae-00ba-4b2b-b2d4-a36f45e38687" (UID: "c62008ae-00ba-4b2b-b2d4-a36f45e38687"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:04:05.050009 kubelet[2551]: I0913 00:04:05.049057 2551 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs2tn\" (UniqueName: \"kubernetes.io/projected/c62008ae-00ba-4b2b-b2d4-a36f45e38687-kube-api-access-xs2tn\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:04:05.050009 kubelet[2551]: I0913 00:04:05.049096 2551 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c62008ae-00ba-4b2b-b2d4-a36f45e38687-calico-apiserver-certs\") on node \"ci-4081-3-5-n-d78c7abf5e\" DevicePath \"\"" Sep 13 00:04:05.487282 systemd[1]: var-lib-kubelet-pods-c62008ae\x2d00ba\x2d4b2b\x2db2d4\x2da36f45e38687-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 13 00:04:05.652843 systemd[1]: Removed slice kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice - libcontainer container kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice. Sep 13 00:04:05.652955 systemd[1]: kubepods-besteffort-podc62008ae_00ba_4b2b_b2d4_a36f45e38687.slice: Consumed 1.133s CPU time. Sep 13 00:04:06.163762 kubelet[2551]: I0913 00:04:06.163085 2551 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62008ae-00ba-4b2b-b2d4-a36f45e38687" path="/var/lib/kubelet/pods/c62008ae-00ba-4b2b-b2d4-a36f45e38687/volumes" Sep 13 00:04:17.663880 systemd[1]: run-containerd-runc-k8s.io-278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b-runc.dI2NNK.mount: Deactivated successfully. 
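Back at 00:03:48.653, the pod_startup_latency_tracker entry for csi-node-driver-54r9g encodes an identity worth making explicit: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:03:48.653329447 − 00:03:09 = 39.653329447s), and podStartSLOduration is that figure minus the image-pull window taken on the monotonic m=+ readings (61.371945658 − 47.607277173 = 13.764668485s), i.e. 39.653329447 − 13.764668485 = 25.888660962s, exactly the logged value. A few lines of Go confirm the arithmetic; the field names are kubelet's, the computation here is just a check consistent with the logged numbers:

// Reproduces the arithmetic behind the pod_startup_latency_tracker line
// above: the SLO duration is the end-to-end startup time minus the
// image-pull window, with the pull window measured on the monotonic
// (m=+...) clock readings quoted in the log entry.
package main

import "fmt"

func main() {
	const (
		e2e           = 39.653329447 // 00:03:48.653329447 - 00:03:09, in s
		firstPullMono = 47.607277173 // m=+ at firstStartedPulling
		lastPullMono  = 61.371945658 // m=+ at lastFinishedPulling
	)
	pull := lastPullMono - firstPullMono
	slo := e2e - pull
	fmt.Printf("pull window: %.9f s\n", pull)         // 13.764668485
	fmt.Printf("podStartSLOduration: %.9f s\n", slo) // 25.888660962
}

The exclusion of pull time is what lets the tracker report a 25.9 s SLO-relevant startup for a pod whose wall-clock startup was 39.7 s: roughly 13.8 s of that was spent downloading images.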
Sep 13 00:04:17.701689 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.CZMbXF.mount: Deactivated successfully. Sep 13 00:04:48.942440 kubelet[2551]: I0913 00:04:48.942365 2551 scope.go:117] "RemoveContainer" containerID="8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c" Sep 13 00:04:48.945885 containerd[1482]: time="2025-09-13T00:04:48.945192961Z" level=info msg="RemoveContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\"" Sep 13 00:04:48.949927 containerd[1482]: time="2025-09-13T00:04:48.949854308Z" level=info msg="RemoveContainer for \"8d7540b2b4c480f365c1dae70a36f64204d0c71c232c52646e1b99d74c7b6c4c\" returns successfully" Sep 13 00:04:48.951394 containerd[1482]: time="2025-09-13T00:04:48.951363916Z" level=info msg="StopPodSandbox for \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\"" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:48.989 [WARNING][6431] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:48.989 [INFO][6431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:48.989 [INFO][6431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" iface="eth0" netns="" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:48.989 [INFO][6431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:48.989 [INFO][6431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.009 [INFO][6438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.009 [INFO][6438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.009 [INFO][6438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.022 [WARNING][6438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.022 [INFO][6438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.024 [INFO][6438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:04:49.027284 containerd[1482]: 2025-09-13 00:04:49.025 [INFO][6431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.027871 containerd[1482]: time="2025-09-13T00:04:49.027332479Z" level=info msg="TearDown network for sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" successfully" Sep 13 00:04:49.027871 containerd[1482]: time="2025-09-13T00:04:49.027360080Z" level=info msg="StopPodSandbox for \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" returns successfully" Sep 13 00:04:49.028456 containerd[1482]: time="2025-09-13T00:04:49.028085924Z" level=info msg="RemovePodSandbox for \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\"" Sep 13 00:04:49.028456 containerd[1482]: time="2025-09-13T00:04:49.028115124Z" level=info msg="Forcibly stopping sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\"" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.065 [WARNING][6453] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" WorkloadEndpoint="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.065 [INFO][6453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.065 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" iface="eth0" netns="" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.065 [INFO][6453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.066 [INFO][6453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.088 [INFO][6460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.088 [INFO][6460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.088 [INFO][6460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.100 [WARNING][6460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.100 [INFO][6460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" HandleID="k8s-pod-network.dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Workload="ci--4081--3--5--n--d78c7abf5e-k8s-calico--apiserver--f9bf767ff--65nq2-eth0" Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.103 [INFO][6460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:04:49.107380 containerd[1482]: 2025-09-13 00:04:49.105 [INFO][6453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771" Sep 13 00:04:49.108262 containerd[1482]: time="2025-09-13T00:04:49.107438357Z" level=info msg="TearDown network for sandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" successfully" Sep 13 00:04:49.113847 containerd[1482]: time="2025-09-13T00:04:49.113793075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:04:49.113999 containerd[1482]: time="2025-09-13T00:04:49.113881476Z" level=info msg="RemovePodSandbox \"dff27a30920db751b0b508bc816606bf1fa1f44776fd6f0aded1d5a63769c771\" returns successfully" Sep 13 00:05:22.949425 systemd[1]: Started sshd@7-188.245.230.74:22-147.75.109.163:47696.service - OpenSSH per-connection server daemon (147.75.109.163:47696). 
Sep 13 00:05:23.939831 sshd[6594]: Accepted publickey for core from 147.75.109.163 port 47696 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:23.942190 sshd[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:23.949551 systemd-logind[1459]: New session 8 of user core. Sep 13 00:05:23.953082 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:05:24.741698 sshd[6594]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:24.745038 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:05:24.745635 systemd[1]: sshd@7-188.245.230.74:22-147.75.109.163:47696.service: Deactivated successfully. Sep 13 00:05:24.748172 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:05:24.750472 systemd-logind[1459]: Removed session 8. Sep 13 00:05:29.923218 systemd[1]: Started sshd@8-188.245.230.74:22-147.75.109.163:47700.service - OpenSSH per-connection server daemon (147.75.109.163:47700). Sep 13 00:05:30.906329 sshd[6611]: Accepted publickey for core from 147.75.109.163 port 47700 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:30.907563 sshd[6611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:30.912881 systemd-logind[1459]: New session 9 of user core. Sep 13 00:05:30.919329 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:05:31.679577 sshd[6611]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:31.684566 systemd[1]: sshd@8-188.245.230.74:22-147.75.109.163:47700.service: Deactivated successfully. Sep 13 00:05:31.687063 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:05:31.687943 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:05:31.690348 systemd-logind[1459]: Removed session 9. Sep 13 00:05:36.857250 systemd[1]: Started sshd@9-188.245.230.74:22-147.75.109.163:33960.service - OpenSSH per-connection server daemon (147.75.109.163:33960). Sep 13 00:05:37.836928 sshd[6648]: Accepted publickey for core from 147.75.109.163 port 33960 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:37.838852 sshd[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:37.843527 systemd-logind[1459]: New session 10 of user core. Sep 13 00:05:37.851213 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:05:38.595831 sshd[6648]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:38.601080 systemd[1]: sshd@9-188.245.230.74:22-147.75.109.163:33960.service: Deactivated successfully. Sep 13 00:05:38.601155 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:05:38.605351 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:05:38.606644 systemd-logind[1459]: Removed session 10. Sep 13 00:05:43.774484 systemd[1]: Started sshd@10-188.245.230.74:22-147.75.109.163:57778.service - OpenSSH per-connection server daemon (147.75.109.163:57778). Sep 13 00:05:44.755651 sshd[6662]: Accepted publickey for core from 147.75.109.163 port 57778 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:44.757833 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:44.762919 systemd-logind[1459]: New session 11 of user core. Sep 13 00:05:44.770317 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 13 00:05:45.512350 sshd[6662]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:45.517987 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:05:45.519331 systemd[1]: sshd@10-188.245.230.74:22-147.75.109.163:57778.service: Deactivated successfully. Sep 13 00:05:45.521467 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:05:45.524004 systemd-logind[1459]: Removed session 11. Sep 13 00:05:45.821367 systemd[1]: Started sshd@11-188.245.230.74:22-195.62.49.203:36394.service - OpenSSH per-connection server daemon (195.62.49.203:36394). Sep 13 00:05:46.007184 sshd[6675]: Received disconnect from 195.62.49.203 port 36394:11: Bye Bye [preauth] Sep 13 00:05:46.007184 sshd[6675]: Disconnected from authenticating user root 195.62.49.203 port 36394 [preauth] Sep 13 00:05:46.010806 systemd[1]: sshd@11-188.245.230.74:22-195.62.49.203:36394.service: Deactivated successfully. Sep 13 00:05:47.702463 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.FYtnIC.mount: Deactivated successfully. Sep 13 00:05:50.685356 systemd[1]: Started sshd@12-188.245.230.74:22-147.75.109.163:51736.service - OpenSSH per-connection server daemon (147.75.109.163:51736). Sep 13 00:05:51.678798 sshd[6722]: Accepted publickey for core from 147.75.109.163 port 51736 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:51.681633 sshd[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:51.688205 systemd-logind[1459]: New session 12 of user core. Sep 13 00:05:51.698287 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:05:52.431350 sshd[6722]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:52.438003 systemd[1]: sshd@12-188.245.230.74:22-147.75.109.163:51736.service: Deactivated successfully. Sep 13 00:05:52.443314 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:05:52.444115 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:05:52.446508 systemd-logind[1459]: Removed session 12. Sep 13 00:05:57.601357 systemd[1]: Started sshd@13-188.245.230.74:22-147.75.109.163:51742.service - OpenSSH per-connection server daemon (147.75.109.163:51742). Sep 13 00:05:58.593696 sshd[6737]: Accepted publickey for core from 147.75.109.163 port 51742 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:05:58.595517 sshd[6737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:05:58.601979 systemd-logind[1459]: New session 13 of user core. Sep 13 00:05:58.609264 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:05:59.351604 sshd[6737]: pam_unix(sshd:session): session closed for user core Sep 13 00:05:59.357509 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:05:59.357843 systemd[1]: sshd@13-188.245.230.74:22-147.75.109.163:51742.service: Deactivated successfully. Sep 13 00:05:59.360260 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:05:59.361507 systemd-logind[1459]: Removed session 13. Sep 13 00:06:04.530449 systemd[1]: Started sshd@14-188.245.230.74:22-147.75.109.163:49140.service - OpenSSH per-connection server daemon (147.75.109.163:49140). 
Sep 13 00:06:05.525316 sshd[6772]: Accepted publickey for core from 147.75.109.163 port 49140 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:05.527145 sshd[6772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:05.534033 systemd-logind[1459]: New session 14 of user core. Sep 13 00:06:05.539118 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:06:06.292435 sshd[6772]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:06.298731 systemd[1]: sshd@14-188.245.230.74:22-147.75.109.163:49140.service: Deactivated successfully. Sep 13 00:06:06.299002 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:06:06.301981 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:06:06.303607 systemd-logind[1459]: Removed session 14. Sep 13 00:06:08.872748 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.ty8oIF.mount: Deactivated successfully. Sep 13 00:06:11.473446 systemd[1]: Started sshd@15-188.245.230.74:22-147.75.109.163:47550.service - OpenSSH per-connection server daemon (147.75.109.163:47550). Sep 13 00:06:12.456748 sshd[6814]: Accepted publickey for core from 147.75.109.163 port 47550 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:12.458470 sshd[6814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:12.463710 systemd-logind[1459]: New session 15 of user core. Sep 13 00:06:12.471239 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:06:13.214056 sshd[6814]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:13.219354 systemd[1]: sshd@15-188.245.230.74:22-147.75.109.163:47550.service: Deactivated successfully. Sep 13 00:06:13.223876 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:06:13.226879 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:06:13.228205 systemd-logind[1459]: Removed session 15. Sep 13 00:06:18.395183 systemd[1]: Started sshd@16-188.245.230.74:22-147.75.109.163:47566.service - OpenSSH per-connection server daemon (147.75.109.163:47566). Sep 13 00:06:19.387461 sshd[6866]: Accepted publickey for core from 147.75.109.163 port 47566 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:19.389990 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:19.395769 systemd-logind[1459]: New session 16 of user core. Sep 13 00:06:19.401182 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:06:20.152786 sshd[6866]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:20.158691 systemd[1]: sshd@16-188.245.230.74:22-147.75.109.163:47566.service: Deactivated successfully. Sep 13 00:06:20.161738 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:06:20.163236 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:06:20.164772 systemd-logind[1459]: Removed session 16. Sep 13 00:06:25.330223 systemd[1]: Started sshd@17-188.245.230.74:22-147.75.109.163:41692.service - OpenSSH per-connection server daemon (147.75.109.163:41692). 
Sep 13 00:06:26.321128 sshd[6903]: Accepted publickey for core from 147.75.109.163 port 41692 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:26.323158 sshd[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:26.328921 systemd-logind[1459]: New session 17 of user core. Sep 13 00:06:26.333099 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:06:27.114515 sshd[6903]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:27.121061 systemd[1]: sshd@17-188.245.230.74:22-147.75.109.163:41692.service: Deactivated successfully. Sep 13 00:06:27.124265 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:06:27.126303 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:06:27.127725 systemd-logind[1459]: Removed session 17. Sep 13 00:06:32.296423 systemd[1]: Started sshd@18-188.245.230.74:22-147.75.109.163:44504.service - OpenSSH per-connection server daemon (147.75.109.163:44504). Sep 13 00:06:33.285330 sshd[6944]: Accepted publickey for core from 147.75.109.163 port 44504 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:33.287946 sshd[6944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:33.294459 systemd-logind[1459]: New session 18 of user core. Sep 13 00:06:33.299199 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:06:34.047426 sshd[6944]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:34.052184 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:06:34.052429 systemd[1]: sshd@18-188.245.230.74:22-147.75.109.163:44504.service: Deactivated successfully. Sep 13 00:06:34.054249 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:06:34.056129 systemd-logind[1459]: Removed session 18. Sep 13 00:06:39.221340 systemd[1]: Started sshd@19-188.245.230.74:22-147.75.109.163:44514.service - OpenSSH per-connection server daemon (147.75.109.163:44514). Sep 13 00:06:40.200969 sshd[6958]: Accepted publickey for core from 147.75.109.163 port 44514 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:40.203220 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:40.208964 systemd-logind[1459]: New session 19 of user core. Sep 13 00:06:40.217172 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:06:40.955937 sshd[6958]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:40.962122 systemd[1]: sshd@19-188.245.230.74:22-147.75.109.163:44514.service: Deactivated successfully. Sep 13 00:06:40.964626 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:06:40.965480 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:06:40.966769 systemd-logind[1459]: Removed session 19. Sep 13 00:06:46.152403 systemd[1]: Started sshd@20-188.245.230.74:22-147.75.109.163:41864.service - OpenSSH per-connection server daemon (147.75.109.163:41864). Sep 13 00:06:47.203953 sshd[6987]: Accepted publickey for core from 147.75.109.163 port 41864 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:47.205881 sshd[6987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:47.210701 systemd-logind[1459]: New session 20 of user core. Sep 13 00:06:47.218451 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 13 00:06:47.668001 systemd[1]: run-containerd-runc-k8s.io-278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b-runc.3ub16f.mount: Deactivated successfully. Sep 13 00:06:47.703240 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.0fP0wS.mount: Deactivated successfully. Sep 13 00:06:48.004931 sshd[6987]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:48.011323 systemd[1]: sshd@20-188.245.230.74:22-147.75.109.163:41864.service: Deactivated successfully. Sep 13 00:06:48.011623 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:06:48.014819 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:06:48.016717 systemd-logind[1459]: Removed session 20. Sep 13 00:06:53.179566 systemd[1]: Started sshd@21-188.245.230.74:22-147.75.109.163:46446.service - OpenSSH per-connection server daemon (147.75.109.163:46446). Sep 13 00:06:54.148363 sshd[7043]: Accepted publickey for core from 147.75.109.163 port 46446 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:06:54.150507 sshd[7043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:06:54.155056 systemd-logind[1459]: New session 21 of user core. Sep 13 00:06:54.161083 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:06:54.893940 sshd[7043]: pam_unix(sshd:session): session closed for user core Sep 13 00:06:54.897870 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:06:54.899448 systemd[1]: sshd@21-188.245.230.74:22-147.75.109.163:46446.service: Deactivated successfully. Sep 13 00:06:54.902210 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:06:54.904608 systemd-logind[1459]: Removed session 21. Sep 13 00:07:00.075267 systemd[1]: Started sshd@22-188.245.230.74:22-147.75.109.163:46456.service - OpenSSH per-connection server daemon (147.75.109.163:46456). Sep 13 00:07:01.078586 sshd[7058]: Accepted publickey for core from 147.75.109.163 port 46456 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:01.079838 sshd[7058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:01.090887 systemd-logind[1459]: New session 22 of user core. Sep 13 00:07:01.093197 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:07:01.857643 sshd[7058]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:01.862593 systemd[1]: sshd@22-188.245.230.74:22-147.75.109.163:46456.service: Deactivated successfully. Sep 13 00:07:01.865768 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:07:01.868407 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:07:01.870569 systemd-logind[1459]: Removed session 22. Sep 13 00:07:07.045459 systemd[1]: Started sshd@23-188.245.230.74:22-147.75.109.163:35036.service - OpenSSH per-connection server daemon (147.75.109.163:35036). Sep 13 00:07:08.092072 sshd[7097]: Accepted publickey for core from 147.75.109.163 port 35036 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:08.094653 sshd[7097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:08.100553 systemd-logind[1459]: New session 23 of user core. Sep 13 00:07:08.108245 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 13 00:07:08.898168 sshd[7097]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:08.903121 systemd[1]: sshd@23-188.245.230.74:22-147.75.109.163:35036.service: Deactivated successfully. Sep 13 00:07:08.906641 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:07:08.909722 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:07:08.911317 systemd-logind[1459]: Removed session 23. Sep 13 00:07:14.070442 systemd[1]: Started sshd@24-188.245.230.74:22-147.75.109.163:44974.service - OpenSSH per-connection server daemon (147.75.109.163:44974). Sep 13 00:07:15.036890 sshd[7131]: Accepted publickey for core from 147.75.109.163 port 44974 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:15.039373 sshd[7131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:15.046079 systemd-logind[1459]: New session 24 of user core. Sep 13 00:07:15.055230 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:07:15.778580 sshd[7131]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:15.784579 systemd[1]: sshd@24-188.245.230.74:22-147.75.109.163:44974.service: Deactivated successfully. Sep 13 00:07:15.788263 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:07:15.789464 systemd-logind[1459]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:07:15.791137 systemd-logind[1459]: Removed session 24. Sep 13 00:07:20.595728 systemd[1]: run-containerd-runc-k8s.io-278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b-runc.fosMBW.mount: Deactivated successfully. Sep 13 00:07:20.955366 systemd[1]: Started sshd@25-188.245.230.74:22-147.75.109.163:39692.service - OpenSSH per-connection server daemon (147.75.109.163:39692). Sep 13 00:07:21.932786 sshd[7201]: Accepted publickey for core from 147.75.109.163 port 39692 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:21.934832 sshd[7201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:21.939558 systemd-logind[1459]: New session 25 of user core. Sep 13 00:07:21.946369 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:07:22.696374 sshd[7201]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:22.701118 systemd-logind[1459]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:07:22.701467 systemd[1]: sshd@25-188.245.230.74:22-147.75.109.163:39692.service: Deactivated successfully. Sep 13 00:07:22.704718 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:07:22.706489 systemd-logind[1459]: Removed session 25. Sep 13 00:07:27.876389 systemd[1]: Started sshd@26-188.245.230.74:22-147.75.109.163:39708.service - OpenSSH per-connection server daemon (147.75.109.163:39708). Sep 13 00:07:28.859172 sshd[7217]: Accepted publickey for core from 147.75.109.163 port 39708 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:28.861075 sshd[7217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:28.866488 systemd-logind[1459]: New session 26 of user core. Sep 13 00:07:28.871251 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:07:29.624127 sshd[7217]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:29.629737 systemd[1]: sshd@26-188.245.230.74:22-147.75.109.163:39708.service: Deactivated successfully. 
Sep 13 00:07:29.632586 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:07:29.633866 systemd-logind[1459]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:07:29.637104 systemd-logind[1459]: Removed session 26. Sep 13 00:07:31.485048 systemd[1]: run-containerd-runc-k8s.io-83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0-runc.6kpHxA.mount: Deactivated successfully. Sep 13 00:07:34.803487 systemd[1]: Started sshd@27-188.245.230.74:22-147.75.109.163:59314.service - OpenSSH per-connection server daemon (147.75.109.163:59314). Sep 13 00:07:35.788658 sshd[7253]: Accepted publickey for core from 147.75.109.163 port 59314 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:35.791189 sshd[7253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:35.797796 systemd-logind[1459]: New session 27 of user core. Sep 13 00:07:35.804093 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:07:36.554830 sshd[7253]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:36.560109 systemd[1]: sshd@27-188.245.230.74:22-147.75.109.163:59314.service: Deactivated successfully. Sep 13 00:07:36.563861 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:07:36.566244 systemd-logind[1459]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:07:36.567707 systemd-logind[1459]: Removed session 27. Sep 13 00:07:41.733454 systemd[1]: Started sshd@28-188.245.230.74:22-147.75.109.163:40500.service - OpenSSH per-connection server daemon (147.75.109.163:40500). Sep 13 00:07:42.713673 sshd[7267]: Accepted publickey for core from 147.75.109.163 port 40500 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:42.716159 sshd[7267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:42.721538 systemd-logind[1459]: New session 28 of user core. Sep 13 00:07:42.731816 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 00:07:43.469481 sshd[7267]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:43.474653 systemd[1]: sshd@28-188.245.230.74:22-147.75.109.163:40500.service: Deactivated successfully. Sep 13 00:07:43.477603 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 00:07:43.480434 systemd-logind[1459]: Session 28 logged out. Waiting for processes to exit. Sep 13 00:07:43.481832 systemd-logind[1459]: Removed session 28. Sep 13 00:07:48.644342 systemd[1]: Started sshd@29-188.245.230.74:22-147.75.109.163:40506.service - OpenSSH per-connection server daemon (147.75.109.163:40506). Sep 13 00:07:49.627704 sshd[7324]: Accepted publickey for core from 147.75.109.163 port 40506 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:49.630465 sshd[7324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:49.636513 systemd-logind[1459]: New session 29 of user core. Sep 13 00:07:49.644328 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 00:07:50.381580 sshd[7324]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:50.386540 systemd[1]: sshd@29-188.245.230.74:22-147.75.109.163:40506.service: Deactivated successfully. Sep 13 00:07:50.388776 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 00:07:50.389998 systemd-logind[1459]: Session 29 logged out. Waiting for processes to exit. Sep 13 00:07:50.391155 systemd-logind[1459]: Removed session 29. 
Sep 13 00:07:55.562598 systemd[1]: Started sshd@30-188.245.230.74:22-147.75.109.163:34314.service - OpenSSH per-connection server daemon (147.75.109.163:34314). Sep 13 00:07:56.558197 sshd[7340]: Accepted publickey for core from 147.75.109.163 port 34314 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:07:56.560432 sshd[7340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:07:56.564871 systemd-logind[1459]: New session 30 of user core. Sep 13 00:07:56.572243 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 13 00:07:57.322554 sshd[7340]: pam_unix(sshd:session): session closed for user core Sep 13 00:07:57.327178 systemd[1]: sshd@30-188.245.230.74:22-147.75.109.163:34314.service: Deactivated successfully. Sep 13 00:07:57.330288 systemd[1]: session-30.scope: Deactivated successfully. Sep 13 00:07:57.331366 systemd-logind[1459]: Session 30 logged out. Waiting for processes to exit. Sep 13 00:07:57.332652 systemd-logind[1459]: Removed session 30. Sep 13 00:08:02.510176 systemd[1]: Started sshd@31-188.245.230.74:22-147.75.109.163:59586.service - OpenSSH per-connection server daemon (147.75.109.163:59586). Sep 13 00:08:03.489801 sshd[7375]: Accepted publickey for core from 147.75.109.163 port 59586 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:03.494134 sshd[7375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:03.503643 systemd-logind[1459]: New session 31 of user core. Sep 13 00:08:03.509609 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 13 00:08:04.250222 sshd[7375]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:04.254485 systemd-logind[1459]: Session 31 logged out. Waiting for processes to exit. Sep 13 00:08:04.254837 systemd[1]: sshd@31-188.245.230.74:22-147.75.109.163:59586.service: Deactivated successfully. Sep 13 00:08:04.257556 systemd[1]: session-31.scope: Deactivated successfully. Sep 13 00:08:04.260076 systemd-logind[1459]: Removed session 31. Sep 13 00:08:09.424286 systemd[1]: Started sshd@32-188.245.230.74:22-147.75.109.163:59594.service - OpenSSH per-connection server daemon (147.75.109.163:59594). Sep 13 00:08:10.395675 sshd[7408]: Accepted publickey for core from 147.75.109.163 port 59594 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:10.397522 sshd[7408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:10.402071 systemd-logind[1459]: New session 32 of user core. Sep 13 00:08:10.409147 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 13 00:08:11.155955 sshd[7408]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:11.160601 systemd[1]: sshd@32-188.245.230.74:22-147.75.109.163:59594.service: Deactivated successfully. Sep 13 00:08:11.164123 systemd[1]: session-32.scope: Deactivated successfully. Sep 13 00:08:11.165165 systemd-logind[1459]: Session 32 logged out. Waiting for processes to exit. Sep 13 00:08:11.166360 systemd-logind[1459]: Removed session 32. Sep 13 00:08:16.334171 systemd[1]: Started sshd@33-188.245.230.74:22-147.75.109.163:52026.service - OpenSSH per-connection server daemon (147.75.109.163:52026). 
Sep 13 00:08:17.308911 sshd[7429]: Accepted publickey for core from 147.75.109.163 port 52026 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:17.311031 sshd[7429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:17.315546 systemd-logind[1459]: New session 33 of user core. Sep 13 00:08:17.320151 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 13 00:08:18.061061 sshd[7429]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:18.067085 systemd[1]: sshd@33-188.245.230.74:22-147.75.109.163:52026.service: Deactivated successfully. Sep 13 00:08:18.069851 systemd[1]: session-33.scope: Deactivated successfully. Sep 13 00:08:18.071507 systemd-logind[1459]: Session 33 logged out. Waiting for processes to exit. Sep 13 00:08:18.073101 systemd-logind[1459]: Removed session 33. Sep 13 00:08:23.235191 systemd[1]: Started sshd@34-188.245.230.74:22-147.75.109.163:59988.service - OpenSSH per-connection server daemon (147.75.109.163:59988). Sep 13 00:08:24.220890 sshd[7517]: Accepted publickey for core from 147.75.109.163 port 59988 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:24.223076 sshd[7517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:24.229671 systemd-logind[1459]: New session 34 of user core. Sep 13 00:08:24.235225 systemd[1]: Started session-34.scope - Session 34 of User core. Sep 13 00:08:24.966949 sshd[7517]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:24.971728 systemd[1]: sshd@34-188.245.230.74:22-147.75.109.163:59988.service: Deactivated successfully. Sep 13 00:08:24.974030 systemd[1]: session-34.scope: Deactivated successfully. Sep 13 00:08:24.976169 systemd-logind[1459]: Session 34 logged out. Waiting for processes to exit. Sep 13 00:08:24.977509 systemd-logind[1459]: Removed session 34. Sep 13 00:08:30.139862 systemd[1]: Started sshd@35-188.245.230.74:22-147.75.109.163:60076.service - OpenSSH per-connection server daemon (147.75.109.163:60076). Sep 13 00:08:31.136509 sshd[7531]: Accepted publickey for core from 147.75.109.163 port 60076 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:31.138073 sshd[7531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:31.142595 systemd-logind[1459]: New session 35 of user core. Sep 13 00:08:31.150126 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 13 00:08:31.478723 systemd[1]: run-containerd-runc-k8s.io-83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0-runc.mtGt97.mount: Deactivated successfully. Sep 13 00:08:31.912437 sshd[7531]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:31.916498 systemd[1]: sshd@35-188.245.230.74:22-147.75.109.163:60076.service: Deactivated successfully. Sep 13 00:08:31.919116 systemd[1]: session-35.scope: Deactivated successfully. Sep 13 00:08:31.921457 systemd-logind[1459]: Session 35 logged out. Waiting for processes to exit. Sep 13 00:08:31.923062 systemd-logind[1459]: Removed session 35. Sep 13 00:08:37.087285 systemd[1]: Started sshd@36-188.245.230.74:22-147.75.109.163:60092.service - OpenSSH per-connection server daemon (147.75.109.163:60092). 
Sep 13 00:08:38.080286 sshd[7565]: Accepted publickey for core from 147.75.109.163 port 60092 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:38.082397 sshd[7565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:38.089536 systemd-logind[1459]: New session 36 of user core. Sep 13 00:08:38.095151 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 13 00:08:38.842084 sshd[7565]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:38.848231 systemd[1]: sshd@36-188.245.230.74:22-147.75.109.163:60092.service: Deactivated successfully. Sep 13 00:08:38.850500 systemd[1]: session-36.scope: Deactivated successfully. Sep 13 00:08:38.851782 systemd-logind[1459]: Session 36 logged out. Waiting for processes to exit. Sep 13 00:08:38.853227 systemd-logind[1459]: Removed session 36. Sep 13 00:08:44.022450 systemd[1]: Started sshd@37-188.245.230.74:22-147.75.109.163:38684.service - OpenSSH per-connection server daemon (147.75.109.163:38684). Sep 13 00:08:45.002350 sshd[7579]: Accepted publickey for core from 147.75.109.163 port 38684 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:45.005055 sshd[7579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:45.010864 systemd-logind[1459]: New session 37 of user core. Sep 13 00:08:45.018304 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 13 00:08:45.758340 sshd[7579]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:45.763493 systemd[1]: sshd@37-188.245.230.74:22-147.75.109.163:38684.service: Deactivated successfully. Sep 13 00:08:45.765864 systemd[1]: session-37.scope: Deactivated successfully. Sep 13 00:08:45.767047 systemd-logind[1459]: Session 37 logged out. Waiting for processes to exit. Sep 13 00:08:45.768233 systemd-logind[1459]: Removed session 37. Sep 13 00:08:50.928333 systemd[1]: Started sshd@38-188.245.230.74:22-147.75.109.163:50726.service - OpenSSH per-connection server daemon (147.75.109.163:50726). Sep 13 00:08:51.900644 sshd[7639]: Accepted publickey for core from 147.75.109.163 port 50726 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:51.902607 sshd[7639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:51.909749 systemd-logind[1459]: New session 38 of user core. Sep 13 00:08:51.914097 systemd[1]: Started session-38.scope - Session 38 of User core. Sep 13 00:08:52.651448 sshd[7639]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:52.657520 systemd-logind[1459]: Session 38 logged out. Waiting for processes to exit. Sep 13 00:08:52.658742 systemd[1]: sshd@38-188.245.230.74:22-147.75.109.163:50726.service: Deactivated successfully. Sep 13 00:08:52.662608 systemd[1]: session-38.scope: Deactivated successfully. Sep 13 00:08:52.664349 systemd-logind[1459]: Removed session 38. Sep 13 00:08:57.832712 systemd[1]: Started sshd@39-188.245.230.74:22-147.75.109.163:50732.service - OpenSSH per-connection server daemon (147.75.109.163:50732). Sep 13 00:08:58.812828 sshd[7655]: Accepted publickey for core from 147.75.109.163 port 50732 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:08:58.815854 sshd[7655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:08:58.822422 systemd-logind[1459]: New session 39 of user core. Sep 13 00:08:58.829325 systemd[1]: Started session-39.scope - Session 39 of User core. 
Sep 13 00:08:59.563658 sshd[7655]: pam_unix(sshd:session): session closed for user core Sep 13 00:08:59.570325 systemd-logind[1459]: Session 39 logged out. Waiting for processes to exit. Sep 13 00:08:59.571074 systemd[1]: sshd@39-188.245.230.74:22-147.75.109.163:50732.service: Deactivated successfully. Sep 13 00:08:59.574307 systemd[1]: session-39.scope: Deactivated successfully. Sep 13 00:08:59.576626 systemd-logind[1459]: Removed session 39. Sep 13 00:09:04.743958 systemd[1]: Started sshd@40-188.245.230.74:22-147.75.109.163:47194.service - OpenSSH per-connection server daemon (147.75.109.163:47194). Sep 13 00:09:05.722000 sshd[7691]: Accepted publickey for core from 147.75.109.163 port 47194 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:05.724635 sshd[7691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:05.732975 systemd-logind[1459]: New session 40 of user core. Sep 13 00:09:05.738950 systemd[1]: Started session-40.scope - Session 40 of User core. Sep 13 00:09:06.493383 sshd[7691]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:06.500833 systemd-logind[1459]: Session 40 logged out. Waiting for processes to exit. Sep 13 00:09:06.502548 systemd[1]: sshd@40-188.245.230.74:22-147.75.109.163:47194.service: Deactivated successfully. Sep 13 00:09:06.509457 systemd[1]: session-40.scope: Deactivated successfully. Sep 13 00:09:06.510957 systemd-logind[1459]: Removed session 40. Sep 13 00:09:11.662921 systemd[1]: Started sshd@41-188.245.230.74:22-147.75.109.163:49890.service - OpenSSH per-connection server daemon (147.75.109.163:49890). Sep 13 00:09:12.659970 sshd[7724]: Accepted publickey for core from 147.75.109.163 port 49890 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:12.661366 sshd[7724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:12.666885 systemd-logind[1459]: New session 41 of user core. Sep 13 00:09:12.677378 systemd[1]: Started session-41.scope - Session 41 of User core. Sep 13 00:09:13.420092 sshd[7724]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:13.426766 systemd[1]: sshd@41-188.245.230.74:22-147.75.109.163:49890.service: Deactivated successfully. Sep 13 00:09:13.426803 systemd-logind[1459]: Session 41 logged out. Waiting for processes to exit. Sep 13 00:09:13.429967 systemd[1]: session-41.scope: Deactivated successfully. Sep 13 00:09:13.430790 systemd-logind[1459]: Removed session 41. Sep 13 00:09:18.600391 systemd[1]: Started sshd@42-188.245.230.74:22-147.75.109.163:49900.service - OpenSSH per-connection server daemon (147.75.109.163:49900). Sep 13 00:09:19.591676 sshd[7775]: Accepted publickey for core from 147.75.109.163 port 49900 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:19.593780 sshd[7775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:19.598873 systemd-logind[1459]: New session 42 of user core. Sep 13 00:09:19.602152 systemd[1]: Started session-42.scope - Session 42 of User core. Sep 13 00:09:20.355580 sshd[7775]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:20.361102 systemd[1]: sshd@42-188.245.230.74:22-147.75.109.163:49900.service: Deactivated successfully. Sep 13 00:09:20.363752 systemd[1]: session-42.scope: Deactivated successfully. Sep 13 00:09:20.365397 systemd-logind[1459]: Session 42 logged out. Waiting for processes to exit. 
Sep 13 00:09:20.367311 systemd-logind[1459]: Removed session 42. Sep 13 00:09:20.890181 systemd[1]: Started sshd@43-188.245.230.74:22-195.62.49.203:59072.service - OpenSSH per-connection server daemon (195.62.49.203:59072). Sep 13 00:09:21.075654 sshd[7810]: Received disconnect from 195.62.49.203 port 59072:11: Bye Bye [preauth] Sep 13 00:09:21.075654 sshd[7810]: Disconnected from authenticating user root 195.62.49.203 port 59072 [preauth] Sep 13 00:09:21.079729 systemd[1]: sshd@43-188.245.230.74:22-195.62.49.203:59072.service: Deactivated successfully. Sep 13 00:09:25.532311 systemd[1]: Started sshd@44-188.245.230.74:22-147.75.109.163:56760.service - OpenSSH per-connection server daemon (147.75.109.163:56760). Sep 13 00:09:26.516964 sshd[7819]: Accepted publickey for core from 147.75.109.163 port 56760 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:26.519275 sshd[7819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:26.525060 systemd-logind[1459]: New session 43 of user core. Sep 13 00:09:26.533550 systemd[1]: Started session-43.scope - Session 43 of User core. Sep 13 00:09:27.279980 sshd[7819]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:27.284800 systemd-logind[1459]: Session 43 logged out. Waiting for processes to exit. Sep 13 00:09:27.286135 systemd[1]: sshd@44-188.245.230.74:22-147.75.109.163:56760.service: Deactivated successfully. Sep 13 00:09:27.288640 systemd[1]: session-43.scope: Deactivated successfully. Sep 13 00:09:27.290187 systemd-logind[1459]: Removed session 43. Sep 13 00:09:27.455506 systemd[1]: Started sshd@45-188.245.230.74:22-147.75.109.163:56768.service - OpenSSH per-connection server daemon (147.75.109.163:56768). Sep 13 00:09:28.436059 sshd[7833]: Accepted publickey for core from 147.75.109.163 port 56768 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:28.438016 sshd[7833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:28.442358 systemd-logind[1459]: New session 44 of user core. Sep 13 00:09:28.451436 systemd[1]: Started session-44.scope - Session 44 of User core. Sep 13 00:09:29.221599 sshd[7833]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:29.226982 systemd-logind[1459]: Session 44 logged out. Waiting for processes to exit. Sep 13 00:09:29.227195 systemd[1]: sshd@45-188.245.230.74:22-147.75.109.163:56768.service: Deactivated successfully. Sep 13 00:09:29.230571 systemd[1]: session-44.scope: Deactivated successfully. Sep 13 00:09:29.231773 systemd-logind[1459]: Removed session 44. Sep 13 00:09:29.395447 systemd[1]: Started sshd@46-188.245.230.74:22-147.75.109.163:56776.service - OpenSSH per-connection server daemon (147.75.109.163:56776). Sep 13 00:09:30.364477 sshd[7844]: Accepted publickey for core from 147.75.109.163 port 56776 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:30.366497 sshd[7844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:30.371264 systemd-logind[1459]: New session 45 of user core. Sep 13 00:09:30.377283 systemd[1]: Started session-45.scope - Session 45 of User core. Sep 13 00:09:31.115481 sshd[7844]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:31.121202 systemd[1]: sshd@46-188.245.230.74:22-147.75.109.163:56776.service: Deactivated successfully. Sep 13 00:09:31.123766 systemd[1]: session-45.scope: Deactivated successfully. 
Sep 13 00:09:31.124748 systemd-logind[1459]: Session 45 logged out. Waiting for processes to exit. Sep 13 00:09:31.127371 systemd-logind[1459]: Removed session 45. Sep 13 00:09:36.290356 systemd[1]: Started sshd@47-188.245.230.74:22-147.75.109.163:60330.service - OpenSSH per-connection server daemon (147.75.109.163:60330). Sep 13 00:09:37.274933 sshd[7878]: Accepted publickey for core from 147.75.109.163 port 60330 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:37.276725 sshd[7878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:37.283787 systemd-logind[1459]: New session 46 of user core. Sep 13 00:09:37.288209 systemd[1]: Started session-46.scope - Session 46 of User core. Sep 13 00:09:38.042363 sshd[7878]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:38.048919 systemd[1]: sshd@47-188.245.230.74:22-147.75.109.163:60330.service: Deactivated successfully. Sep 13 00:09:38.051744 systemd[1]: session-46.scope: Deactivated successfully. Sep 13 00:09:38.052638 systemd-logind[1459]: Session 46 logged out. Waiting for processes to exit. Sep 13 00:09:38.054971 systemd-logind[1459]: Removed session 46. Sep 13 00:09:43.231332 systemd[1]: Started sshd@48-188.245.230.74:22-147.75.109.163:55488.service - OpenSSH per-connection server daemon (147.75.109.163:55488). Sep 13 00:09:44.278764 sshd[7891]: Accepted publickey for core from 147.75.109.163 port 55488 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:44.280738 sshd[7891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:44.285848 systemd-logind[1459]: New session 47 of user core. Sep 13 00:09:44.292136 systemd[1]: Started session-47.scope - Session 47 of User core. Sep 13 00:09:45.075441 sshd[7891]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:45.080457 systemd-logind[1459]: Session 47 logged out. Waiting for processes to exit. Sep 13 00:09:45.081222 systemd[1]: sshd@48-188.245.230.74:22-147.75.109.163:55488.service: Deactivated successfully. Sep 13 00:09:45.083738 systemd[1]: session-47.scope: Deactivated successfully. Sep 13 00:09:45.085463 systemd-logind[1459]: Removed session 47. Sep 13 00:09:50.252398 systemd[1]: Started sshd@49-188.245.230.74:22-147.75.109.163:40454.service - OpenSSH per-connection server daemon (147.75.109.163:40454). Sep 13 00:09:51.233157 sshd[7966]: Accepted publickey for core from 147.75.109.163 port 40454 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:51.236189 sshd[7966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:51.241353 systemd-logind[1459]: New session 48 of user core. Sep 13 00:09:51.247299 systemd[1]: Started session-48.scope - Session 48 of User core. Sep 13 00:09:51.989362 sshd[7966]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:51.995163 systemd[1]: sshd@49-188.245.230.74:22-147.75.109.163:40454.service: Deactivated successfully. Sep 13 00:09:51.997877 systemd[1]: session-48.scope: Deactivated successfully. Sep 13 00:09:51.999231 systemd-logind[1459]: Session 48 logged out. Waiting for processes to exit. Sep 13 00:09:52.000268 systemd-logind[1459]: Removed session 48. Sep 13 00:09:57.166298 systemd[1]: Started sshd@50-188.245.230.74:22-147.75.109.163:40468.service - OpenSSH per-connection server daemon (147.75.109.163:40468). 
Sep 13 00:09:58.150533 sshd[7981]: Accepted publickey for core from 147.75.109.163 port 40468 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:09:58.153040 sshd[7981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:58.159105 systemd-logind[1459]: New session 49 of user core. Sep 13 00:09:58.163274 systemd[1]: Started session-49.scope - Session 49 of User core. Sep 13 00:09:58.913779 sshd[7981]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:58.918771 systemd-logind[1459]: Session 49 logged out. Waiting for processes to exit. Sep 13 00:09:58.919227 systemd[1]: sshd@50-188.245.230.74:22-147.75.109.163:40468.service: Deactivated successfully. Sep 13 00:09:58.922768 systemd[1]: session-49.scope: Deactivated successfully. Sep 13 00:09:58.924240 systemd-logind[1459]: Removed session 49. Sep 13 00:10:04.089351 systemd[1]: Started sshd@51-188.245.230.74:22-147.75.109.163:49872.service - OpenSSH per-connection server daemon (147.75.109.163:49872). Sep 13 00:10:05.070144 sshd[8014]: Accepted publickey for core from 147.75.109.163 port 49872 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:10:05.071272 sshd[8014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:05.078030 systemd-logind[1459]: New session 50 of user core. Sep 13 00:10:05.082480 systemd[1]: Started session-50.scope - Session 50 of User core. Sep 13 00:10:05.827307 sshd[8014]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:05.832981 systemd[1]: sshd@51-188.245.230.74:22-147.75.109.163:49872.service: Deactivated successfully. Sep 13 00:10:05.836225 systemd[1]: session-50.scope: Deactivated successfully. Sep 13 00:10:05.837873 systemd-logind[1459]: Session 50 logged out. Waiting for processes to exit. Sep 13 00:10:05.838848 systemd-logind[1459]: Removed session 50. Sep 13 00:10:11.009340 systemd[1]: Started sshd@52-188.245.230.74:22-147.75.109.163:58194.service - OpenSSH per-connection server daemon (147.75.109.163:58194). Sep 13 00:10:11.990314 sshd[8046]: Accepted publickey for core from 147.75.109.163 port 58194 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:10:11.993099 sshd[8046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:11.999257 systemd-logind[1459]: New session 51 of user core. Sep 13 00:10:12.006215 systemd[1]: Started session-51.scope - Session 51 of User core. Sep 13 00:10:12.753744 sshd[8046]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:12.759169 systemd[1]: sshd@52-188.245.230.74:22-147.75.109.163:58194.service: Deactivated successfully. Sep 13 00:10:12.762462 systemd[1]: session-51.scope: Deactivated successfully. Sep 13 00:10:12.763315 systemd-logind[1459]: Session 51 logged out. Waiting for processes to exit. Sep 13 00:10:12.764415 systemd-logind[1459]: Removed session 51. Sep 13 00:10:17.930345 systemd[1]: Started sshd@53-188.245.230.74:22-147.75.109.163:58206.service - OpenSSH per-connection server daemon (147.75.109.163:58206). Sep 13 00:10:18.924875 sshd[8097]: Accepted publickey for core from 147.75.109.163 port 58206 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:10:18.926925 sshd[8097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:10:18.933747 systemd-logind[1459]: New session 52 of user core. Sep 13 00:10:18.942137 systemd[1]: Started session-52.scope - Session 52 of User core. 
Sep 13 00:10:19.690517 sshd[8097]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:19.697539 systemd[1]: sshd@53-188.245.230.74:22-147.75.109.163:58206.service: Deactivated successfully.
Sep 13 00:10:19.700263 systemd[1]: session-52.scope: Deactivated successfully.
Sep 13 00:10:19.701451 systemd-logind[1459]: Session 52 logged out. Waiting for processes to exit.
Sep 13 00:10:19.702624 systemd-logind[1459]: Removed session 52.
Sep 13 00:10:23.296180 systemd[1]: Started sshd@54-188.245.230.74:22-195.62.49.203:51046.service - OpenSSH per-connection server daemon (195.62.49.203:51046).
Sep 13 00:10:23.478588 sshd[8132]: Received disconnect from 195.62.49.203 port 51046:11: Bye Bye [preauth]
Sep 13 00:10:23.478588 sshd[8132]: Disconnected from authenticating user root 195.62.49.203 port 51046 [preauth]
Sep 13 00:10:23.482413 systemd[1]: sshd@54-188.245.230.74:22-195.62.49.203:51046.service: Deactivated successfully.
Sep 13 00:10:24.866298 systemd[1]: Started sshd@55-188.245.230.74:22-147.75.109.163:49962.service - OpenSSH per-connection server daemon (147.75.109.163:49962).
Sep 13 00:10:25.836051 sshd[8137]: Accepted publickey for core from 147.75.109.163 port 49962 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:10:25.838144 sshd[8137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:25.843466 systemd-logind[1459]: New session 53 of user core.
Sep 13 00:10:25.853200 systemd[1]: Started session-53.scope - Session 53 of User core.
Sep 13 00:10:26.586020 sshd[8137]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:26.591370 systemd-logind[1459]: Session 53 logged out. Waiting for processes to exit.
Sep 13 00:10:26.592829 systemd[1]: sshd@55-188.245.230.74:22-147.75.109.163:49962.service: Deactivated successfully.
Sep 13 00:10:26.596718 systemd[1]: session-53.scope: Deactivated successfully.
Sep 13 00:10:26.598770 systemd-logind[1459]: Removed session 53.
Sep 13 00:10:31.766350 systemd[1]: Started sshd@56-188.245.230.74:22-147.75.109.163:33942.service - OpenSSH per-connection server daemon (147.75.109.163:33942).
Sep 13 00:10:32.754084 sshd[8172]: Accepted publickey for core from 147.75.109.163 port 33942 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:10:32.756100 sshd[8172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:32.762233 systemd-logind[1459]: New session 54 of user core.
Sep 13 00:10:32.767197 systemd[1]: Started session-54.scope - Session 54 of User core.
Sep 13 00:10:33.519528 sshd[8172]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:33.525638 systemd[1]: sshd@56-188.245.230.74:22-147.75.109.163:33942.service: Deactivated successfully.
Sep 13 00:10:33.529472 systemd[1]: session-54.scope: Deactivated successfully.
Sep 13 00:10:33.530413 systemd-logind[1459]: Session 54 logged out. Waiting for processes to exit.
Sep 13 00:10:33.531457 systemd-logind[1459]: Removed session 54.
Sep 13 00:10:38.699301 systemd[1]: Started sshd@57-188.245.230.74:22-147.75.109.163:33948.service - OpenSSH per-connection server daemon (147.75.109.163:33948).
Sep 13 00:10:39.689596 sshd[8187]: Accepted publickey for core from 147.75.109.163 port 33948 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:10:39.691777 sshd[8187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:39.699024 systemd-logind[1459]: New session 55 of user core.
Sep 13 00:10:39.707099 systemd[1]: Started session-55.scope - Session 55 of User core.
Sep 13 00:10:40.491460 sshd[8187]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:40.495812 systemd-logind[1459]: Session 55 logged out. Waiting for processes to exit.
Sep 13 00:10:40.497098 systemd[1]: sshd@57-188.245.230.74:22-147.75.109.163:33948.service: Deactivated successfully.
Sep 13 00:10:40.499627 systemd[1]: session-55.scope: Deactivated successfully.
Sep 13 00:10:40.501120 systemd-logind[1459]: Removed session 55.
Sep 13 00:10:45.674346 systemd[1]: Started sshd@58-188.245.230.74:22-147.75.109.163:37174.service - OpenSSH per-connection server daemon (147.75.109.163:37174).
Sep 13 00:10:46.657801 sshd[8199]: Accepted publickey for core from 147.75.109.163 port 37174 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:10:46.661929 sshd[8199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:46.667598 systemd-logind[1459]: New session 56 of user core.
Sep 13 00:10:46.673123 systemd[1]: Started session-56.scope - Session 56 of User core.
Sep 13 00:10:47.419754 sshd[8199]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:47.426755 systemd[1]: sshd@58-188.245.230.74:22-147.75.109.163:37174.service: Deactivated successfully.
Sep 13 00:10:47.429299 systemd[1]: session-56.scope: Deactivated successfully.
Sep 13 00:10:47.431252 systemd-logind[1459]: Session 56 logged out. Waiting for processes to exit.
Sep 13 00:10:47.432317 systemd-logind[1459]: Removed session 56.
Sep 13 00:10:52.597682 systemd[1]: Started sshd@59-188.245.230.74:22-147.75.109.163:36860.service - OpenSSH per-connection server daemon (147.75.109.163:36860).
Sep 13 00:10:53.569205 sshd[8254]: Accepted publickey for core from 147.75.109.163 port 36860 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:10:53.572151 sshd[8254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:10:53.578135 systemd-logind[1459]: New session 57 of user core.
Sep 13 00:10:53.583144 systemd[1]: Started session-57.scope - Session 57 of User core.
Sep 13 00:10:54.328247 sshd[8254]: pam_unix(sshd:session): session closed for user core
Sep 13 00:10:54.334387 systemd[1]: sshd@59-188.245.230.74:22-147.75.109.163:36860.service: Deactivated successfully.
Sep 13 00:10:54.334436 systemd-logind[1459]: Session 57 logged out. Waiting for processes to exit.
Sep 13 00:10:54.338559 systemd[1]: session-57.scope: Deactivated successfully.
Sep 13 00:10:54.339923 systemd-logind[1459]: Removed session 57.
Sep 13 00:10:59.507330 systemd[1]: Started sshd@60-188.245.230.74:22-147.75.109.163:36872.service - OpenSSH per-connection server daemon (147.75.109.163:36872).
Sep 13 00:11:00.489631 sshd[8266]: Accepted publickey for core from 147.75.109.163 port 36872 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:00.492243 sshd[8266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:00.500003 systemd-logind[1459]: New session 58 of user core.
Sep 13 00:11:00.504186 systemd[1]: Started session-58.scope - Session 58 of User core.
Sep 13 00:11:01.250173 sshd[8266]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:01.254945 systemd-logind[1459]: Session 58 logged out. Waiting for processes to exit.
Sep 13 00:11:01.255476 systemd[1]: sshd@60-188.245.230.74:22-147.75.109.163:36872.service: Deactivated successfully.
Sep 13 00:11:01.258359 systemd[1]: session-58.scope: Deactivated successfully.
Sep 13 00:11:01.261590 systemd-logind[1459]: Removed session 58.
Sep 13 00:11:06.429577 systemd[1]: Started sshd@61-188.245.230.74:22-147.75.109.163:58778.service - OpenSSH per-connection server daemon (147.75.109.163:58778).
Sep 13 00:11:07.424761 sshd[8299]: Accepted publickey for core from 147.75.109.163 port 58778 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:07.426287 sshd[8299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:07.431379 systemd-logind[1459]: New session 59 of user core.
Sep 13 00:11:07.439280 systemd[1]: Started session-59.scope - Session 59 of User core.
Sep 13 00:11:08.204050 sshd[8299]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:08.209876 systemd-logind[1459]: Session 59 logged out. Waiting for processes to exit.
Sep 13 00:11:08.210643 systemd[1]: sshd@61-188.245.230.74:22-147.75.109.163:58778.service: Deactivated successfully.
Sep 13 00:11:08.215075 systemd[1]: session-59.scope: Deactivated successfully.
Sep 13 00:11:08.216582 systemd-logind[1459]: Removed session 59.
Sep 13 00:11:13.379000 systemd[1]: Started sshd@62-188.245.230.74:22-147.75.109.163:51334.service - OpenSSH per-connection server daemon (147.75.109.163:51334).
Sep 13 00:11:14.365275 sshd[8333]: Accepted publickey for core from 147.75.109.163 port 51334 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:14.367016 sshd[8333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:14.371962 systemd-logind[1459]: New session 60 of user core.
Sep 13 00:11:14.376083 systemd[1]: Started session-60.scope - Session 60 of User core.
Sep 13 00:11:15.124644 sshd[8333]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:15.129564 systemd[1]: sshd@62-188.245.230.74:22-147.75.109.163:51334.service: Deactivated successfully.
Sep 13 00:11:15.133142 systemd[1]: session-60.scope: Deactivated successfully.
Sep 13 00:11:15.134379 systemd-logind[1459]: Session 60 logged out. Waiting for processes to exit.
Sep 13 00:11:15.135447 systemd-logind[1459]: Removed session 60.
Sep 13 00:11:20.298599 systemd[1]: Started sshd@63-188.245.230.74:22-147.75.109.163:42502.service - OpenSSH per-connection server daemon (147.75.109.163:42502).
Sep 13 00:11:21.267013 sshd[8385]: Accepted publickey for core from 147.75.109.163 port 42502 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:21.269008 sshd[8385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:21.275246 systemd-logind[1459]: New session 61 of user core.
Sep 13 00:11:21.281266 systemd[1]: Started session-61.scope - Session 61 of User core.
Sep 13 00:11:22.017590 sshd[8385]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:22.022958 systemd-logind[1459]: Session 61 logged out. Waiting for processes to exit.
Sep 13 00:11:22.023475 systemd[1]: sshd@63-188.245.230.74:22-147.75.109.163:42502.service: Deactivated successfully.
Sep 13 00:11:22.027744 systemd[1]: session-61.scope: Deactivated successfully.
Sep 13 00:11:22.028836 systemd-logind[1459]: Removed session 61.
Sep 13 00:11:27.189198 systemd[1]: Started sshd@64-188.245.230.74:22-147.75.109.163:42516.service - OpenSSH per-connection server daemon (147.75.109.163:42516).
Sep 13 00:11:27.954458 systemd[1]: Started sshd@65-188.245.230.74:22-195.62.49.203:48528.service - OpenSSH per-connection server daemon (195.62.49.203:48528).
Sep 13 00:11:28.134617 sshd[8443]: Received disconnect from 195.62.49.203 port 48528:11: Bye Bye [preauth]
Sep 13 00:11:28.134617 sshd[8443]: Disconnected from authenticating user root 195.62.49.203 port 48528 [preauth]
Sep 13 00:11:28.138655 systemd[1]: sshd@65-188.245.230.74:22-195.62.49.203:48528.service: Deactivated successfully.
Sep 13 00:11:28.166542 sshd[8440]: Accepted publickey for core from 147.75.109.163 port 42516 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:28.168509 sshd[8440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:28.175692 systemd-logind[1459]: New session 62 of user core.
Sep 13 00:11:28.185224 systemd[1]: Started session-62.scope - Session 62 of User core.
Sep 13 00:11:28.917416 sshd[8440]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:28.923105 systemd[1]: sshd@64-188.245.230.74:22-147.75.109.163:42516.service: Deactivated successfully.
Sep 13 00:11:28.926238 systemd[1]: session-62.scope: Deactivated successfully.
Sep 13 00:11:28.927467 systemd-logind[1459]: Session 62 logged out. Waiting for processes to exit.
Sep 13 00:11:28.930796 systemd-logind[1459]: Removed session 62.
Sep 13 00:11:34.097350 systemd[1]: Started sshd@66-188.245.230.74:22-147.75.109.163:58994.service - OpenSSH per-connection server daemon (147.75.109.163:58994).
Sep 13 00:11:35.086253 sshd[8480]: Accepted publickey for core from 147.75.109.163 port 58994 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:35.087524 sshd[8480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:35.093206 systemd-logind[1459]: New session 63 of user core.
Sep 13 00:11:35.102175 systemd[1]: Started session-63.scope - Session 63 of User core.
Sep 13 00:11:35.852951 sshd[8480]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:35.858608 systemd[1]: sshd@66-188.245.230.74:22-147.75.109.163:58994.service: Deactivated successfully.
Sep 13 00:11:35.861533 systemd[1]: session-63.scope: Deactivated successfully.
Sep 13 00:11:35.862701 systemd-logind[1459]: Session 63 logged out. Waiting for processes to exit.
Sep 13 00:11:35.864310 systemd-logind[1459]: Removed session 63.
Sep 13 00:11:41.032166 systemd[1]: Started sshd@67-188.245.230.74:22-147.75.109.163:33924.service - OpenSSH per-connection server daemon (147.75.109.163:33924).
Sep 13 00:11:42.033682 sshd[8494]: Accepted publickey for core from 147.75.109.163 port 33924 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:42.036516 sshd[8494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:42.045413 systemd-logind[1459]: New session 64 of user core.
Sep 13 00:11:42.048741 systemd[1]: Started session-64.scope - Session 64 of User core.
Sep 13 00:11:42.847433 sshd[8494]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:42.850490 systemd-logind[1459]: Session 64 logged out. Waiting for processes to exit.
Sep 13 00:11:42.852275 systemd[1]: sshd@67-188.245.230.74:22-147.75.109.163:33924.service: Deactivated successfully.
Sep 13 00:11:42.855806 systemd[1]: session-64.scope: Deactivated successfully.
Sep 13 00:11:42.858881 systemd-logind[1459]: Removed session 64.
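
Note: unit names of the form sshd@N-LOCALIP:22-REMOTEIP:PORT.service throughout this log are what systemd generates when sshd runs socket-activated with Accept=yes: the socket unit accepts each TCP connection itself and spawns a fresh template instance named after the connection endpoints, which is why every session above is bracketed by a "Started sshd@...service" line and a matching pair of "Deactivated successfully" lines for the per-connection service and the session scope. A minimal sketch of that wiring, assuming the conventional sshd.socket / sshd@.service unit names rather than anything recorded in this log:

# sshd.socket (sketch)
[Unit]
Description=OpenSSH per-connection server socket

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd@.service (sketch; one instance per accepted connection)
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
# -i tells sshd the connection is already present on stdin/stdout
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

Each instance exits when its connection closes, so no long-lived sshd daemon appears in the journal, only per-connection units.
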
Sep 13 00:11:48.025235 systemd[1]: Started sshd@68-188.245.230.74:22-147.75.109.163:33932.service - OpenSSH per-connection server daemon (147.75.109.163:33932).
Sep 13 00:11:49.003228 sshd[8556]: Accepted publickey for core from 147.75.109.163 port 33932 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:49.005821 sshd[8556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:49.012364 systemd-logind[1459]: New session 65 of user core.
Sep 13 00:11:49.020247 systemd[1]: Started session-65.scope - Session 65 of User core.
Sep 13 00:11:49.758275 sshd[8556]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:49.762739 systemd-logind[1459]: Session 65 logged out. Waiting for processes to exit.
Sep 13 00:11:49.762939 systemd[1]: sshd@68-188.245.230.74:22-147.75.109.163:33932.service: Deactivated successfully.
Sep 13 00:11:49.765656 systemd[1]: session-65.scope: Deactivated successfully.
Sep 13 00:11:49.768463 systemd-logind[1459]: Removed session 65.
Sep 13 00:11:54.947300 systemd[1]: Started sshd@69-188.245.230.74:22-147.75.109.163:40816.service - OpenSSH per-connection server daemon (147.75.109.163:40816).
Sep 13 00:11:55.987413 sshd[8571]: Accepted publickey for core from 147.75.109.163 port 40816 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:11:55.989636 sshd[8571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:55.994330 systemd-logind[1459]: New session 66 of user core.
Sep 13 00:11:56.003252 systemd[1]: Started session-66.scope - Session 66 of User core.
Sep 13 00:11:56.779569 sshd[8571]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:56.785629 systemd[1]: sshd@69-188.245.230.74:22-147.75.109.163:40816.service: Deactivated successfully.
Sep 13 00:11:56.788629 systemd[1]: session-66.scope: Deactivated successfully.
Sep 13 00:11:56.790765 systemd-logind[1459]: Session 66 logged out. Waiting for processes to exit.
Sep 13 00:11:56.792435 systemd-logind[1459]: Removed session 66.
Sep 13 00:12:01.961435 systemd[1]: Started sshd@70-188.245.230.74:22-147.75.109.163:38300.service - OpenSSH per-connection server daemon (147.75.109.163:38300).
Sep 13 00:12:02.947636 sshd[8607]: Accepted publickey for core from 147.75.109.163 port 38300 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:02.949415 sshd[8607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:02.957774 systemd-logind[1459]: New session 67 of user core.
Sep 13 00:12:02.961189 systemd[1]: Started session-67.scope - Session 67 of User core.
Sep 13 00:12:03.717055 sshd[8607]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:03.725232 systemd[1]: sshd@70-188.245.230.74:22-147.75.109.163:38300.service: Deactivated successfully.
Sep 13 00:12:03.728084 systemd[1]: session-67.scope: Deactivated successfully.
Sep 13 00:12:03.730535 systemd-logind[1459]: Session 67 logged out. Waiting for processes to exit.
Sep 13 00:12:03.731880 systemd-logind[1459]: Removed session 67.
Sep 13 00:12:08.893299 systemd[1]: Started sshd@71-188.245.230.74:22-147.75.109.163:38316.service - OpenSSH per-connection server daemon (147.75.109.163:38316).
Sep 13 00:12:09.900218 sshd[8636]: Accepted publickey for core from 147.75.109.163 port 38316 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:09.901188 sshd[8636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:09.916254 systemd-logind[1459]: New session 68 of user core.
Sep 13 00:12:09.922065 systemd[1]: Started session-68.scope - Session 68 of User core.
Sep 13 00:12:10.710890 sshd[8636]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:10.715249 systemd[1]: sshd@71-188.245.230.74:22-147.75.109.163:38316.service: Deactivated successfully.
Sep 13 00:12:10.719217 systemd[1]: session-68.scope: Deactivated successfully.
Sep 13 00:12:10.724156 systemd-logind[1459]: Session 68 logged out. Waiting for processes to exit.
Sep 13 00:12:10.725968 systemd-logind[1459]: Removed session 68.
Sep 13 00:12:15.894031 systemd[1]: Started sshd@72-188.245.230.74:22-147.75.109.163:42800.service - OpenSSH per-connection server daemon (147.75.109.163:42800).
Sep 13 00:12:16.870470 sshd[8653]: Accepted publickey for core from 147.75.109.163 port 42800 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:16.872241 sshd[8653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:16.876866 systemd-logind[1459]: New session 69 of user core.
Sep 13 00:12:16.889220 systemd[1]: Started session-69.scope - Session 69 of User core.
Sep 13 00:12:17.638544 sshd[8653]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:17.645324 systemd[1]: sshd@72-188.245.230.74:22-147.75.109.163:42800.service: Deactivated successfully.
Sep 13 00:12:17.648647 systemd[1]: session-69.scope: Deactivated successfully.
Sep 13 00:12:17.649833 systemd-logind[1459]: Session 69 logged out. Waiting for processes to exit.
Sep 13 00:12:17.651317 systemd-logind[1459]: Removed session 69.
Sep 13 00:12:22.812251 systemd[1]: Started sshd@73-188.245.230.74:22-147.75.109.163:52034.service - OpenSSH per-connection server daemon (147.75.109.163:52034).
Sep 13 00:12:23.794776 sshd[8725]: Accepted publickey for core from 147.75.109.163 port 52034 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:23.796820 sshd[8725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:23.802088 systemd-logind[1459]: New session 70 of user core.
Sep 13 00:12:23.808143 systemd[1]: Started session-70.scope - Session 70 of User core.
Sep 13 00:12:24.549743 sshd[8725]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:24.555357 systemd-logind[1459]: Session 70 logged out. Waiting for processes to exit.
Sep 13 00:12:24.555964 systemd[1]: sshd@73-188.245.230.74:22-147.75.109.163:52034.service: Deactivated successfully.
Sep 13 00:12:24.558779 systemd[1]: session-70.scope: Deactivated successfully.
Sep 13 00:12:24.561707 systemd-logind[1459]: Removed session 70.
Sep 13 00:12:29.726509 systemd[1]: Started sshd@74-188.245.230.74:22-147.75.109.163:52044.service - OpenSSH per-connection server daemon (147.75.109.163:52044).
Sep 13 00:12:30.708646 sshd[8740]: Accepted publickey for core from 147.75.109.163 port 52044 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:30.711484 sshd[8740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:30.717424 systemd-logind[1459]: New session 71 of user core.
Sep 13 00:12:30.720152 systemd[1]: Started session-71.scope - Session 71 of User core.
Sep 13 00:12:31.464351 sshd[8740]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:31.469433 systemd[1]: sshd@74-188.245.230.74:22-147.75.109.163:52044.service: Deactivated successfully.
Sep 13 00:12:31.472657 systemd[1]: session-71.scope: Deactivated successfully.
Sep 13 00:12:31.479407 systemd-logind[1459]: Session 71 logged out. Waiting for processes to exit.
Sep 13 00:12:31.481992 systemd-logind[1459]: Removed session 71.
Sep 13 00:12:32.310385 systemd[1]: Started sshd@75-188.245.230.74:22-195.62.49.203:45780.service - OpenSSH per-connection server daemon (195.62.49.203:45780).
Sep 13 00:12:32.489194 sshd[8775]: Received disconnect from 195.62.49.203 port 45780:11: Bye Bye [preauth]
Sep 13 00:12:32.489194 sshd[8775]: Disconnected from authenticating user root 195.62.49.203 port 45780 [preauth]
Sep 13 00:12:32.492774 systemd[1]: sshd@75-188.245.230.74:22-195.62.49.203:45780.service: Deactivated successfully.
Sep 13 00:12:36.645328 systemd[1]: Started sshd@76-188.245.230.74:22-147.75.109.163:56514.service - OpenSSH per-connection server daemon (147.75.109.163:56514).
Sep 13 00:12:37.640646 sshd[8780]: Accepted publickey for core from 147.75.109.163 port 56514 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:37.642664 sshd[8780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:37.649511 systemd-logind[1459]: New session 72 of user core.
Sep 13 00:12:37.655629 systemd[1]: Started session-72.scope - Session 72 of User core.
Sep 13 00:12:38.418923 sshd[8780]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:38.424113 systemd[1]: sshd@76-188.245.230.74:22-147.75.109.163:56514.service: Deactivated successfully.
Sep 13 00:12:38.427645 systemd[1]: session-72.scope: Deactivated successfully.
Sep 13 00:12:38.430303 systemd-logind[1459]: Session 72 logged out. Waiting for processes to exit.
Sep 13 00:12:38.432116 systemd-logind[1459]: Removed session 72.
Sep 13 00:12:43.593432 systemd[1]: Started sshd@77-188.245.230.74:22-147.75.109.163:45902.service - OpenSSH per-connection server daemon (147.75.109.163:45902).
Sep 13 00:12:44.575757 sshd[8793]: Accepted publickey for core from 147.75.109.163 port 45902 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:44.577452 sshd[8793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:44.582666 systemd-logind[1459]: New session 73 of user core.
Sep 13 00:12:44.590245 systemd[1]: Started session-73.scope - Session 73 of User core.
Sep 13 00:12:45.347097 sshd[8793]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:45.354016 systemd-logind[1459]: Session 73 logged out. Waiting for processes to exit.
Sep 13 00:12:45.356442 systemd[1]: sshd@77-188.245.230.74:22-147.75.109.163:45902.service: Deactivated successfully.
Sep 13 00:12:45.361494 systemd[1]: session-73.scope: Deactivated successfully.
Sep 13 00:12:45.365603 systemd-logind[1459]: Removed session 73.
Sep 13 00:12:47.660639 systemd[1]: run-containerd-runc-k8s.io-278c9f1e6083d739e99243bc48ddb5fc4ff48da38e27bab74bc4ae2105d6813b-runc.AthRrF.mount: Deactivated successfully.
Sep 13 00:12:50.519221 systemd[1]: Started sshd@78-188.245.230.74:22-147.75.109.163:54354.service - OpenSSH per-connection server daemon (147.75.109.163:54354).
Sep 13 00:12:51.490094 sshd[8844]: Accepted publickey for core from 147.75.109.163 port 54354 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:51.492158 sshd[8844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:51.499600 systemd-logind[1459]: New session 74 of user core.
Sep 13 00:12:51.504170 systemd[1]: Started session-74.scope - Session 74 of User core.
Sep 13 00:12:52.250473 sshd[8844]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:52.255051 systemd[1]: sshd@78-188.245.230.74:22-147.75.109.163:54354.service: Deactivated successfully.
Sep 13 00:12:52.258412 systemd[1]: session-74.scope: Deactivated successfully.
Sep 13 00:12:52.259711 systemd-logind[1459]: Session 74 logged out. Waiting for processes to exit.
Sep 13 00:12:52.261427 systemd-logind[1459]: Removed session 74.
Sep 13 00:12:57.427267 systemd[1]: Started sshd@79-188.245.230.74:22-147.75.109.163:54366.service - OpenSSH per-connection server daemon (147.75.109.163:54366).
Sep 13 00:12:58.409842 sshd[8880]: Accepted publickey for core from 147.75.109.163 port 54366 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:12:58.412054 sshd[8880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:58.418811 systemd-logind[1459]: New session 75 of user core.
Sep 13 00:12:58.425125 systemd[1]: Started session-75.scope - Session 75 of User core.
Sep 13 00:12:59.182396 sshd[8880]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:59.188959 systemd-logind[1459]: Session 75 logged out. Waiting for processes to exit.
Sep 13 00:12:59.189745 systemd[1]: sshd@79-188.245.230.74:22-147.75.109.163:54366.service: Deactivated successfully.
Sep 13 00:12:59.194779 systemd[1]: session-75.scope: Deactivated successfully.
Sep 13 00:12:59.197735 systemd-logind[1459]: Removed session 75.
Sep 13 00:13:00.143084 systemd[1]: Started sshd@80-188.245.230.74:22-213.209.143.117:53250.service - OpenSSH per-connection server daemon (213.209.143.117:53250).
Sep 13 00:13:00.155276 sshd[8893]: banner exchange: Connection from 213.209.143.117 port 53250: invalid format
Sep 13 00:13:00.157219 systemd[1]: sshd@80-188.245.230.74:22-213.209.143.117:53250.service: Deactivated successfully.
Sep 13 00:13:04.361362 systemd[1]: Started sshd@81-188.245.230.74:22-147.75.109.163:40120.service - OpenSSH per-connection server daemon (147.75.109.163:40120).
Sep 13 00:13:05.341105 sshd[8920]: Accepted publickey for core from 147.75.109.163 port 40120 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:05.343489 sshd[8920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:05.348451 systemd-logind[1459]: New session 76 of user core.
Sep 13 00:13:05.356304 systemd[1]: Started session-76.scope - Session 76 of User core.
Sep 13 00:13:06.099293 sshd[8920]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:06.104904 systemd[1]: sshd@81-188.245.230.74:22-147.75.109.163:40120.service: Deactivated successfully.
Sep 13 00:13:06.108150 systemd[1]: session-76.scope: Deactivated successfully.
Sep 13 00:13:06.109372 systemd-logind[1459]: Session 76 logged out. Waiting for processes to exit.
Sep 13 00:13:06.110662 systemd-logind[1459]: Removed session 76.
Sep 13 00:13:08.873397 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.r5M3a6.mount: Deactivated successfully.
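
Note: the connections from 195.62.49.203 above never reach authentication; "Disconnected from authenticating user root ... [preauth]" means the peer gave up during the pre-auth phase while trying user root, and the malformed banner from 213.209.143.117 ("banner exchange: ... invalid format" at 00:13:00) is a non-SSH probe. This is routine internet scanner noise; the only logins that succeed on this host are publickey logins for core. A hedged sshd_config fragment that keeps such probes at exactly this harmless pre-auth stage; the option names are standard OpenSSH, and the drop-in path is illustrative rather than taken from this log:

# /etc/ssh/sshd_config.d/hardening.conf (sketch)
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
AuthenticationMethods publickey
LoginGraceTime 30
MaxAuthTries 3
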
Sep 13 00:13:11.281355 systemd[1]: Started sshd@82-188.245.230.74:22-147.75.109.163:53770.service - OpenSSH per-connection server daemon (147.75.109.163:53770).
Sep 13 00:13:12.263788 sshd[8952]: Accepted publickey for core from 147.75.109.163 port 53770 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:12.267015 sshd[8952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:12.274375 systemd-logind[1459]: New session 77 of user core.
Sep 13 00:13:12.279147 systemd[1]: Started session-77.scope - Session 77 of User core.
Sep 13 00:13:13.047014 sshd[8952]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:13.052573 systemd-logind[1459]: Session 77 logged out. Waiting for processes to exit.
Sep 13 00:13:13.052707 systemd[1]: sshd@82-188.245.230.74:22-147.75.109.163:53770.service: Deactivated successfully.
Sep 13 00:13:13.054719 systemd[1]: session-77.scope: Deactivated successfully.
Sep 13 00:13:13.058136 systemd-logind[1459]: Removed session 77.
Sep 13 00:13:18.236358 systemd[1]: Started sshd@83-188.245.230.74:22-147.75.109.163:53782.service - OpenSSH per-connection server daemon (147.75.109.163:53782).
Sep 13 00:13:19.224753 sshd[9005]: Accepted publickey for core from 147.75.109.163 port 53782 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:19.227409 sshd[9005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:19.232892 systemd-logind[1459]: New session 78 of user core.
Sep 13 00:13:19.238117 systemd[1]: Started session-78.scope - Session 78 of User core.
Sep 13 00:13:19.984234 sshd[9005]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:19.988546 systemd[1]: sshd@83-188.245.230.74:22-147.75.109.163:53782.service: Deactivated successfully.
Sep 13 00:13:19.994875 systemd[1]: session-78.scope: Deactivated successfully.
Sep 13 00:13:19.998375 systemd-logind[1459]: Session 78 logged out. Waiting for processes to exit.
Sep 13 00:13:20.000632 systemd-logind[1459]: Removed session 78.
Sep 13 00:13:22.890990 update_engine[1461]: I20250913 00:13:22.890413 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:13:22.890990 update_engine[1461]: I20250913 00:13:22.890479 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:13:22.890990 update_engine[1461]: I20250913 00:13:22.890842 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:13:22.891628 update_engine[1461]: I20250913 00:13:22.891595 1461 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:13:22.892021 update_engine[1461]: I20250913 00:13:22.891739 1461 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:13:22.892021 update_engine[1461]: I20250913 00:13:22.891762 1461 update_attempter.cc:643] Scheduling an action processor start.
Sep 13 00:13:22.892021 update_engine[1461]: I20250913 00:13:22.891788 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:13:22.894300 update_engine[1461]: I20250913 00:13:22.893218 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:13:22.894300 update_engine[1461]: I20250913 00:13:22.893349 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:13:22.894300 update_engine[1461]: I20250913 00:13:22.893365 1461 omaha_request_action.cc:272] Request:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]:
Sep 13 00:13:22.894300 update_engine[1461]: I20250913 00:13:22.893378 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:13:22.899841 update_engine[1461]: I20250913 00:13:22.899287 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:13:22.899841 update_engine[1461]: I20250913 00:13:22.899633 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:13:22.901196 update_engine[1461]: E20250913 00:13:22.900797 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:13:22.901267 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:13:22.901571 update_engine[1461]: I20250913 00:13:22.901504 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:13:25.159178 systemd[1]: Started sshd@84-188.245.230.74:22-147.75.109.163:43472.service - OpenSSH per-connection server daemon (147.75.109.163:43472).
Sep 13 00:13:26.137352 sshd[9041]: Accepted publickey for core from 147.75.109.163 port 43472 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:26.139338 sshd[9041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:26.146016 systemd-logind[1459]: New session 79 of user core.
Sep 13 00:13:26.151189 systemd[1]: Started session-79.scope - Session 79 of User core.
Sep 13 00:13:26.898639 sshd[9041]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:26.903346 systemd[1]: sshd@84-188.245.230.74:22-147.75.109.163:43472.service: Deactivated successfully.
Sep 13 00:13:26.906748 systemd[1]: session-79.scope: Deactivated successfully.
Sep 13 00:13:26.907804 systemd-logind[1459]: Session 79 logged out. Waiting for processes to exit.
Sep 13 00:13:26.909294 systemd-logind[1459]: Removed session 79.
Sep 13 00:13:32.088489 systemd[1]: Started sshd@85-188.245.230.74:22-147.75.109.163:54436.service - OpenSSH per-connection server daemon (147.75.109.163:54436).
Sep 13 00:13:32.893007 update_engine[1461]: I20250913 00:13:32.892683 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:13:32.893500 update_engine[1461]: I20250913 00:13:32.893148 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:13:32.893552 update_engine[1461]: I20250913 00:13:32.893493 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:13:32.894538 update_engine[1461]: E20250913 00:13:32.894445 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:13:32.894747 update_engine[1461]: I20250913 00:13:32.894546 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:13:33.069467 sshd[9075]: Accepted publickey for core from 147.75.109.163 port 54436 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:33.072039 sshd[9075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:33.079014 systemd-logind[1459]: New session 80 of user core.
Sep 13 00:13:33.084129 systemd[1]: Started session-80.scope - Session 80 of User core.
Sep 13 00:13:33.823888 sshd[9075]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:33.830791 systemd[1]: sshd@85-188.245.230.74:22-147.75.109.163:54436.service: Deactivated successfully.
Sep 13 00:13:33.834320 systemd[1]: session-80.scope: Deactivated successfully.
Sep 13 00:13:33.835797 systemd-logind[1459]: Session 80 logged out. Waiting for processes to exit.
Sep 13 00:13:33.838592 systemd-logind[1459]: Removed session 80.
Sep 13 00:13:33.998334 systemd[1]: Started sshd@86-188.245.230.74:22-147.75.109.163:54448.service - OpenSSH per-connection server daemon (147.75.109.163:54448).
Sep 13 00:13:34.951278 systemd[1]: Started sshd@87-188.245.230.74:22-195.62.49.203:47424.service - OpenSSH per-connection server daemon (195.62.49.203:47424).
Sep 13 00:13:34.975598 sshd[9087]: Accepted publickey for core from 147.75.109.163 port 54448 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:34.977547 sshd[9087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:34.984354 systemd-logind[1459]: New session 81 of user core.
Sep 13 00:13:34.987126 systemd[1]: Started session-81.scope - Session 81 of User core.
Sep 13 00:13:35.131184 sshd[9090]: Received disconnect from 195.62.49.203 port 47424:11: Bye Bye [preauth]
Sep 13 00:13:35.131184 sshd[9090]: Disconnected from authenticating user root 195.62.49.203 port 47424 [preauth]
Sep 13 00:13:35.135366 systemd[1]: sshd@87-188.245.230.74:22-195.62.49.203:47424.service: Deactivated successfully.
Sep 13 00:13:35.876558 sshd[9087]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:35.881824 systemd[1]: sshd@86-188.245.230.74:22-147.75.109.163:54448.service: Deactivated successfully.
Sep 13 00:13:35.886111 systemd[1]: session-81.scope: Deactivated successfully.
Sep 13 00:13:35.891135 systemd-logind[1459]: Session 81 logged out. Waiting for processes to exit.
Sep 13 00:13:35.892571 systemd-logind[1459]: Removed session 81.
Sep 13 00:13:36.056354 systemd[1]: Started sshd@88-188.245.230.74:22-147.75.109.163:54452.service - OpenSSH per-connection server daemon (147.75.109.163:54452).
Sep 13 00:13:37.034487 sshd[9103]: Accepted publickey for core from 147.75.109.163 port 54452 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:37.036637 sshd[9103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:37.042046 systemd-logind[1459]: New session 82 of user core.
Sep 13 00:13:37.044204 systemd[1]: Started session-82.scope - Session 82 of User core.
Sep 13 00:13:39.540149 sshd[9103]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:39.545826 systemd[1]: sshd@88-188.245.230.74:22-147.75.109.163:54452.service: Deactivated successfully.
Sep 13 00:13:39.548121 systemd[1]: session-82.scope: Deactivated successfully.
Sep 13 00:13:39.551350 systemd-logind[1459]: Session 82 logged out. Waiting for processes to exit.
Sep 13 00:13:39.553133 systemd-logind[1459]: Removed session 82.
Sep 13 00:13:39.719222 systemd[1]: Started sshd@89-188.245.230.74:22-147.75.109.163:54454.service - OpenSSH per-connection server daemon (147.75.109.163:54454).
Sep 13 00:13:40.703639 sshd[9121]: Accepted publickey for core from 147.75.109.163 port 54454 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:40.705549 sshd[9121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:40.712607 systemd-logind[1459]: New session 83 of user core.
Sep 13 00:13:40.719215 systemd[1]: Started session-83.scope - Session 83 of User core.
Sep 13 00:13:41.591621 sshd[9121]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:41.597583 systemd-logind[1459]: Session 83 logged out. Waiting for processes to exit.
Sep 13 00:13:41.598407 systemd[1]: sshd@89-188.245.230.74:22-147.75.109.163:54454.service: Deactivated successfully.
Sep 13 00:13:41.602471 systemd[1]: session-83.scope: Deactivated successfully.
Sep 13 00:13:41.604056 systemd-logind[1459]: Removed session 83.
Sep 13 00:13:41.769357 systemd[1]: Started sshd@90-188.245.230.74:22-147.75.109.163:47368.service - OpenSSH per-connection server daemon (147.75.109.163:47368).
Sep 13 00:13:42.750581 sshd[9132]: Accepted publickey for core from 147.75.109.163 port 47368 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:42.752747 sshd[9132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:42.757506 systemd-logind[1459]: New session 84 of user core.
Sep 13 00:13:42.763105 systemd[1]: Started session-84.scope - Session 84 of User core.
Sep 13 00:13:42.888640 update_engine[1461]: I20250913 00:13:42.887935 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:13:42.888640 update_engine[1461]: I20250913 00:13:42.888270 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:13:42.888640 update_engine[1461]: I20250913 00:13:42.888578 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:13:42.889995 update_engine[1461]: E20250913 00:13:42.889853 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:13:42.889995 update_engine[1461]: I20250913 00:13:42.889966 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 13 00:13:43.506175 sshd[9132]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:43.511223 systemd-logind[1459]: Session 84 logged out. Waiting for processes to exit.
Sep 13 00:13:43.512271 systemd[1]: sshd@90-188.245.230.74:22-147.75.109.163:47368.service: Deactivated successfully.
Sep 13 00:13:43.515076 systemd[1]: session-84.scope: Deactivated successfully.
Sep 13 00:13:43.517140 systemd-logind[1459]: Removed session 84.
Sep 13 00:13:48.683908 systemd[1]: Started sshd@91-188.245.230.74:22-147.75.109.163:47376.service - OpenSSH per-connection server daemon (147.75.109.163:47376).
Sep 13 00:13:49.676311 sshd[9185]: Accepted publickey for core from 147.75.109.163 port 47376 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:49.678279 sshd[9185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:49.686626 systemd-logind[1459]: New session 85 of user core.
Sep 13 00:13:49.690087 systemd[1]: Started session-85.scope - Session 85 of User core.
Sep 13 00:13:50.452841 sshd[9185]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:50.460348 systemd[1]: session-85.scope: Deactivated successfully.
Sep 13 00:13:50.463388 systemd[1]: sshd@91-188.245.230.74:22-147.75.109.163:47376.service: Deactivated successfully.
Sep 13 00:13:50.469507 systemd-logind[1459]: Session 85 logged out. Waiting for processes to exit.
Sep 13 00:13:50.470634 systemd-logind[1459]: Removed session 85.
Sep 13 00:13:52.890994 update_engine[1461]: I20250913 00:13:52.890267 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:13:52.890994 update_engine[1461]: I20250913 00:13:52.890607 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:13:52.890994 update_engine[1461]: I20250913 00:13:52.890868 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:13:52.891804 update_engine[1461]: E20250913 00:13:52.891742 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:13:52.891877 update_engine[1461]: I20250913 00:13:52.891817 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:13:52.891877 update_engine[1461]: I20250913 00:13:52.891830 1461 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:13:52.892008 update_engine[1461]: E20250913 00:13:52.891981 1461 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 13 00:13:52.892059 update_engine[1461]: I20250913 00:13:52.892015 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:13:52.892059 update_engine[1461]: I20250913 00:13:52.892025 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:13:52.892059 update_engine[1461]: I20250913 00:13:52.892032 1461 update_attempter.cc:306] Processing Done.
Sep 13 00:13:52.892059 update_engine[1461]: E20250913 00:13:52.892050 1461 update_attempter.cc:619] Update failed.
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892058 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892066 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892076 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892161 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892191 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892200 1461 omaha_request_action.cc:272] Request:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]:
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892210 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:13:52.892400 update_engine[1461]: I20250913 00:13:52.892398 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:13:52.893297 update_engine[1461]: I20250913 00:13:52.892599 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:13:52.893365 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:13:52.893819 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:13:52.893887 update_engine[1461]: E20250913 00:13:52.893406 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893452 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893460 1461 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893465 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893471 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893476 1461 update_attempter.cc:306] Processing Done.
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893481 1461 update_attempter.cc:310] Error event sent.
Sep 13 00:13:52.893887 update_engine[1461]: I20250913 00:13:52.893488 1461 update_check_scheduler.cc:74] Next update check in 40m7s
Sep 13 00:13:55.629481 systemd[1]: Started sshd@92-188.245.230.74:22-147.75.109.163:33230.service - OpenSSH per-connection server daemon (147.75.109.163:33230).
Sep 13 00:13:56.615475 sshd[9200]: Accepted publickey for core from 147.75.109.163 port 33230 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:13:56.617749 sshd[9200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:13:56.623594 systemd-logind[1459]: New session 86 of user core.
Sep 13 00:13:56.627260 systemd[1]: Started session-86.scope - Session 86 of User core.
Sep 13 00:13:57.374239 sshd[9200]: pam_unix(sshd:session): session closed for user core
Sep 13 00:13:57.379089 systemd-logind[1459]: Session 86 logged out. Waiting for processes to exit.
Sep 13 00:13:57.381018 systemd[1]: sshd@92-188.245.230.74:22-147.75.109.163:33230.service: Deactivated successfully.
Sep 13 00:13:57.383433 systemd[1]: session-86.scope: Deactivated successfully.
Sep 13 00:13:57.385784 systemd-logind[1459]: Removed session 86.
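
Note: the update_engine burst between 00:13:22 and 00:13:52 above is an Omaha update check failing by design rather than a network fault: the request is "Posting an Omaha request to disabled", and curl then reports "Could not resolve host: disabled" because the literal word disabled is configured as the update server. After three retries roughly ten seconds apart the action processor gives up, the error event is posted to the same unresolvable host ("Error event sent."), locksmithd passes through UPDATE_STATUS_REPORTING_ERROR_EVENT back to UPDATE_STATUS_IDLE, and the next check is scheduled ("Next update check in 40m7s"). The empty update_engine[1461]: continuation lines carried the XML request body in the original journal. On Flatcar this behaviour normally comes from the update configuration file; a sketch of the documented convention follows (the path and keys are not visible in the log itself, though GROUP=lts matches "Current group set to lts" above):

# /etc/flatcar/update.conf (sketch)
GROUP=lts
SERVER=disabled

With SERVER set to the literal string disabled, update_engine still runs its periodic checks but every request fails fast at DNS resolution, which is exactly the pattern recorded here.
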
Sep 13 00:14:02.550466 systemd[1]: Started sshd@93-188.245.230.74:22-147.75.109.163:42030.service - OpenSSH per-connection server daemon (147.75.109.163:42030).
Sep 13 00:14:03.518596 sshd[9235]: Accepted publickey for core from 147.75.109.163 port 42030 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:03.521513 sshd[9235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:03.535600 systemd-logind[1459]: New session 87 of user core.
Sep 13 00:14:03.547214 systemd[1]: Started session-87.scope - Session 87 of User core.
Sep 13 00:14:04.261673 sshd[9235]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:04.267662 systemd[1]: sshd@93-188.245.230.74:22-147.75.109.163:42030.service: Deactivated successfully.
Sep 13 00:14:04.270379 systemd[1]: session-87.scope: Deactivated successfully.
Sep 13 00:14:04.271690 systemd-logind[1459]: Session 87 logged out. Waiting for processes to exit.
Sep 13 00:14:04.273137 systemd-logind[1459]: Removed session 87.
Sep 13 00:14:09.441388 systemd[1]: Started sshd@94-188.245.230.74:22-147.75.109.163:42038.service - OpenSSH per-connection server daemon (147.75.109.163:42038).
Sep 13 00:14:10.425665 sshd[9277]: Accepted publickey for core from 147.75.109.163 port 42038 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:10.427433 sshd[9277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:10.433171 systemd-logind[1459]: New session 88 of user core.
Sep 13 00:14:10.438100 systemd[1]: Started session-88.scope - Session 88 of User core.
Sep 13 00:14:11.187115 sshd[9277]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:11.191976 systemd[1]: sshd@94-188.245.230.74:22-147.75.109.163:42038.service: Deactivated successfully.
Sep 13 00:14:11.195360 systemd[1]: session-88.scope: Deactivated successfully.
Sep 13 00:14:11.198501 systemd-logind[1459]: Session 88 logged out. Waiting for processes to exit.
Sep 13 00:14:11.200683 systemd-logind[1459]: Removed session 88.
Sep 13 00:14:16.366361 systemd[1]: Started sshd@95-188.245.230.74:22-147.75.109.163:40648.service - OpenSSH per-connection server daemon (147.75.109.163:40648).
Sep 13 00:14:17.349756 sshd[9289]: Accepted publickey for core from 147.75.109.163 port 40648 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:17.352427 sshd[9289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:17.357839 systemd-logind[1459]: New session 89 of user core.
Sep 13 00:14:17.366209 systemd[1]: Started session-89.scope - Session 89 of User core.
Sep 13 00:14:18.095230 sshd[9289]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:18.101102 systemd[1]: sshd@95-188.245.230.74:22-147.75.109.163:40648.service: Deactivated successfully.
Sep 13 00:14:18.102761 systemd[1]: session-89.scope: Deactivated successfully.
Sep 13 00:14:18.105677 systemd-logind[1459]: Session 89 logged out. Waiting for processes to exit.
Sep 13 00:14:18.108069 systemd-logind[1459]: Removed session 89.
Sep 13 00:14:23.270358 systemd[1]: Started sshd@96-188.245.230.74:22-147.75.109.163:39032.service - OpenSSH per-connection server daemon (147.75.109.163:39032).
Sep 13 00:14:24.239101 sshd[9362]: Accepted publickey for core from 147.75.109.163 port 39032 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:24.241282 sshd[9362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:24.245969 systemd-logind[1459]: New session 90 of user core.
Sep 13 00:14:24.253157 systemd[1]: Started session-90.scope - Session 90 of User core.
Sep 13 00:14:24.989154 sshd[9362]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:24.995016 systemd[1]: sshd@96-188.245.230.74:22-147.75.109.163:39032.service: Deactivated successfully.
Sep 13 00:14:24.998660 systemd[1]: session-90.scope: Deactivated successfully.
Sep 13 00:14:25.000312 systemd-logind[1459]: Session 90 logged out. Waiting for processes to exit.
Sep 13 00:14:25.001665 systemd-logind[1459]: Removed session 90.
Sep 13 00:14:30.174395 systemd[1]: Started sshd@97-188.245.230.74:22-147.75.109.163:57636.service - OpenSSH per-connection server daemon (147.75.109.163:57636).
Sep 13 00:14:31.153607 sshd[9374]: Accepted publickey for core from 147.75.109.163 port 57636 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:31.155448 sshd[9374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:31.161876 systemd-logind[1459]: New session 91 of user core.
Sep 13 00:14:31.167260 systemd[1]: Started session-91.scope - Session 91 of User core.
Sep 13 00:14:31.901318 sshd[9374]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:31.906503 systemd[1]: sshd@97-188.245.230.74:22-147.75.109.163:57636.service: Deactivated successfully.
Sep 13 00:14:31.913453 systemd[1]: session-91.scope: Deactivated successfully.
Sep 13 00:14:31.916445 systemd-logind[1459]: Session 91 logged out. Waiting for processes to exit.
Sep 13 00:14:31.917776 systemd-logind[1459]: Removed session 91.
Sep 13 00:14:34.628427 systemd[1]: Started sshd@98-188.245.230.74:22-195.62.49.203:53160.service - OpenSSH per-connection server daemon (195.62.49.203:53160).
Sep 13 00:14:34.782041 sshd[9431]: Received disconnect from 195.62.49.203 port 53160:11: Bye Bye [preauth]
Sep 13 00:14:34.782041 sshd[9431]: Disconnected from authenticating user root 195.62.49.203 port 53160 [preauth]
Sep 13 00:14:34.786320 systemd[1]: sshd@98-188.245.230.74:22-195.62.49.203:53160.service: Deactivated successfully.
Sep 13 00:14:37.080272 systemd[1]: Started sshd@99-188.245.230.74:22-147.75.109.163:57650.service - OpenSSH per-connection server daemon (147.75.109.163:57650).
Sep 13 00:14:38.075071 sshd[9437]: Accepted publickey for core from 147.75.109.163 port 57650 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:38.078024 sshd[9437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:38.083098 systemd-logind[1459]: New session 92 of user core.
Sep 13 00:14:38.090134 systemd[1]: Started session-92.scope - Session 92 of User core.
Sep 13 00:14:38.834238 sshd[9437]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:38.839249 systemd[1]: sshd@99-188.245.230.74:22-147.75.109.163:57650.service: Deactivated successfully.
Sep 13 00:14:38.842274 systemd[1]: session-92.scope: Deactivated successfully.
Sep 13 00:14:38.843985 systemd-logind[1459]: Session 92 logged out. Waiting for processes to exit.
Sep 13 00:14:38.845205 systemd-logind[1459]: Removed session 92.
Sep 13 00:14:44.014787 systemd[1]: Started sshd@100-188.245.230.74:22-147.75.109.163:60170.service - OpenSSH per-connection server daemon (147.75.109.163:60170).
Sep 13 00:14:44.995230 sshd[9450]: Accepted publickey for core from 147.75.109.163 port 60170 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:44.997225 sshd[9450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:45.002580 systemd-logind[1459]: New session 93 of user core.
Sep 13 00:14:45.009225 systemd[1]: Started session-93.scope - Session 93 of User core.
Sep 13 00:14:45.750132 sshd[9450]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:45.755130 systemd[1]: sshd@100-188.245.230.74:22-147.75.109.163:60170.service: Deactivated successfully.
Sep 13 00:14:45.757908 systemd[1]: session-93.scope: Deactivated successfully.
Sep 13 00:14:45.758783 systemd-logind[1459]: Session 93 logged out. Waiting for processes to exit.
Sep 13 00:14:45.759990 systemd-logind[1459]: Removed session 93.
Sep 13 00:14:50.933522 systemd[1]: Started sshd@101-188.245.230.74:22-147.75.109.163:37876.service - OpenSSH per-connection server daemon (147.75.109.163:37876).
Sep 13 00:14:51.981490 sshd[9508]: Accepted publickey for core from 147.75.109.163 port 37876 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:51.983138 sshd[9508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:51.988626 systemd-logind[1459]: New session 94 of user core.
Sep 13 00:14:51.994143 systemd[1]: Started session-94.scope - Session 94 of User core.
Sep 13 00:14:52.770831 sshd[9508]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:52.776983 systemd[1]: sshd@101-188.245.230.74:22-147.75.109.163:37876.service: Deactivated successfully.
Sep 13 00:14:52.777645 systemd-logind[1459]: Session 94 logged out. Waiting for processes to exit.
Sep 13 00:14:52.779436 systemd[1]: session-94.scope: Deactivated successfully.
Sep 13 00:14:52.780754 systemd-logind[1459]: Removed session 94.
Sep 13 00:14:57.950355 systemd[1]: Started sshd@102-188.245.230.74:22-147.75.109.163:37886.service - OpenSSH per-connection server daemon (147.75.109.163:37886).
Sep 13 00:14:58.953125 sshd[9522]: Accepted publickey for core from 147.75.109.163 port 37886 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM
Sep 13 00:14:58.955379 sshd[9522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:14:58.961050 systemd-logind[1459]: New session 95 of user core.
Sep 13 00:14:58.964182 systemd[1]: Started session-95.scope - Session 95 of User core.
Sep 13 00:14:59.709742 sshd[9522]: pam_unix(sshd:session): session closed for user core
Sep 13 00:14:59.715457 systemd[1]: sshd@102-188.245.230.74:22-147.75.109.163:37886.service: Deactivated successfully.
Sep 13 00:14:59.717624 systemd[1]: session-95.scope: Deactivated successfully.
Sep 13 00:14:59.718999 systemd-logind[1459]: Session 95 logged out. Waiting for processes to exit.
Sep 13 00:14:59.721013 systemd-logind[1459]: Removed session 95.
Sep 13 00:15:04.889390 systemd[1]: Started sshd@103-188.245.230.74:22-147.75.109.163:46878.service - OpenSSH per-connection server daemon (147.75.109.163:46878).
Sep 13 00:15:05.860157 sshd[9556]: Accepted publickey for core from 147.75.109.163 port 46878 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:05.862242 sshd[9556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:05.867401 systemd-logind[1459]: New session 96 of user core. Sep 13 00:15:05.876211 systemd[1]: Started session-96.scope - Session 96 of User core. Sep 13 00:15:06.609282 sshd[9556]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:06.614000 systemd[1]: sshd@103-188.245.230.74:22-147.75.109.163:46878.service: Deactivated successfully. Sep 13 00:15:06.616691 systemd[1]: session-96.scope: Deactivated successfully. Sep 13 00:15:06.617709 systemd-logind[1459]: Session 96 logged out. Waiting for processes to exit. Sep 13 00:15:06.619850 systemd-logind[1459]: Removed session 96. Sep 13 00:15:11.793164 systemd[1]: Started sshd@104-188.245.230.74:22-147.75.109.163:58442.service - OpenSSH per-connection server daemon (147.75.109.163:58442). Sep 13 00:15:12.781871 sshd[9589]: Accepted publickey for core from 147.75.109.163 port 58442 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:12.784253 sshd[9589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:12.789553 systemd-logind[1459]: New session 97 of user core. Sep 13 00:15:12.795063 systemd[1]: Started session-97.scope - Session 97 of User core. Sep 13 00:15:13.546869 sshd[9589]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:13.551577 systemd-logind[1459]: Session 97 logged out. Waiting for processes to exit. Sep 13 00:15:13.551588 systemd[1]: sshd@104-188.245.230.74:22-147.75.109.163:58442.service: Deactivated successfully. Sep 13 00:15:13.554222 systemd[1]: session-97.scope: Deactivated successfully. Sep 13 00:15:13.556085 systemd-logind[1459]: Removed session 97. Sep 13 00:15:17.705871 systemd[1]: run-containerd-runc-k8s.io-761e4327ae30eb45e7b249829f616931eec3cae86f3a4ed87b69006e1cb10ad4-runc.sFamK8.mount: Deactivated successfully. Sep 13 00:15:18.721193 systemd[1]: Started sshd@105-188.245.230.74:22-147.75.109.163:58454.service - OpenSSH per-connection server daemon (147.75.109.163:58454). Sep 13 00:15:19.707695 sshd[9642]: Accepted publickey for core from 147.75.109.163 port 58454 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:19.710175 sshd[9642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:19.715705 systemd-logind[1459]: New session 98 of user core. Sep 13 00:15:19.722199 systemd[1]: Started session-98.scope - Session 98 of User core. Sep 13 00:15:20.457010 sshd[9642]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:20.463599 systemd[1]: sshd@105-188.245.230.74:22-147.75.109.163:58454.service: Deactivated successfully. Sep 13 00:15:20.466994 systemd[1]: session-98.scope: Deactivated successfully. Sep 13 00:15:20.468835 systemd-logind[1459]: Session 98 logged out. Waiting for processes to exit. Sep 13 00:15:20.469997 systemd-logind[1459]: Removed session 98. Sep 13 00:15:25.636369 systemd[1]: Started sshd@106-188.245.230.74:22-147.75.109.163:46474.service - OpenSSH per-connection server daemon (147.75.109.163:46474). 
Sep 13 00:15:26.622635 sshd[9676]: Accepted publickey for core from 147.75.109.163 port 46474 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:26.623850 sshd[9676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:26.630444 systemd-logind[1459]: New session 99 of user core. Sep 13 00:15:26.640195 systemd[1]: Started session-99.scope - Session 99 of User core. Sep 13 00:15:27.386981 sshd[9676]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:27.392519 systemd[1]: sshd@106-188.245.230.74:22-147.75.109.163:46474.service: Deactivated successfully. Sep 13 00:15:27.396203 systemd[1]: session-99.scope: Deactivated successfully. Sep 13 00:15:27.397114 systemd-logind[1459]: Session 99 logged out. Waiting for processes to exit. Sep 13 00:15:27.398856 systemd-logind[1459]: Removed session 99. Sep 13 00:15:32.567277 systemd[1]: Started sshd@107-188.245.230.74:22-147.75.109.163:33866.service - OpenSSH per-connection server daemon (147.75.109.163:33866). Sep 13 00:15:32.998309 systemd[1]: Started sshd@108-188.245.230.74:22-195.62.49.203:40806.service - OpenSSH per-connection server daemon (195.62.49.203:40806). Sep 13 00:15:33.183931 sshd[9713]: Received disconnect from 195.62.49.203 port 40806:11: Bye Bye [preauth] Sep 13 00:15:33.184532 sshd[9713]: Disconnected from authenticating user root 195.62.49.203 port 40806 [preauth] Sep 13 00:15:33.187676 systemd[1]: sshd@108-188.245.230.74:22-195.62.49.203:40806.service: Deactivated successfully. Sep 13 00:15:33.566923 sshd[9709]: Accepted publickey for core from 147.75.109.163 port 33866 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:33.569512 sshd[9709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:33.576924 systemd-logind[1459]: New session 100 of user core. Sep 13 00:15:33.586173 systemd[1]: Started session-100.scope - Session 100 of User core. Sep 13 00:15:34.332061 sshd[9709]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:34.337089 systemd-logind[1459]: Session 100 logged out. Waiting for processes to exit. Sep 13 00:15:34.337407 systemd[1]: sshd@107-188.245.230.74:22-147.75.109.163:33866.service: Deactivated successfully. Sep 13 00:15:34.340886 systemd[1]: session-100.scope: Deactivated successfully. Sep 13 00:15:34.342508 systemd-logind[1459]: Removed session 100. Sep 13 00:15:39.506313 systemd[1]: Started sshd@109-188.245.230.74:22-147.75.109.163:33882.service - OpenSSH per-connection server daemon (147.75.109.163:33882). Sep 13 00:15:40.504069 sshd[9728]: Accepted publickey for core from 147.75.109.163 port 33882 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:40.505945 sshd[9728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:40.510756 systemd-logind[1459]: New session 101 of user core. Sep 13 00:15:40.515153 systemd[1]: Started session-101.scope - Session 101 of User core. Sep 13 00:15:41.262164 sshd[9728]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:41.267823 systemd[1]: sshd@109-188.245.230.74:22-147.75.109.163:33882.service: Deactivated successfully. Sep 13 00:15:41.271105 systemd[1]: session-101.scope: Deactivated successfully. Sep 13 00:15:41.273291 systemd-logind[1459]: Session 101 logged out. Waiting for processes to exit. Sep 13 00:15:41.274931 systemd-logind[1459]: Removed session 101. 
Sep 13 00:15:46.436357 systemd[1]: Started sshd@110-188.245.230.74:22-147.75.109.163:45344.service - OpenSSH per-connection server daemon (147.75.109.163:45344). Sep 13 00:15:47.428280 sshd[9743]: Accepted publickey for core from 147.75.109.163 port 45344 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:47.429621 sshd[9743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:47.434931 systemd-logind[1459]: New session 102 of user core. Sep 13 00:15:47.441268 systemd[1]: Started session-102.scope - Session 102 of User core. Sep 13 00:15:48.189106 sshd[9743]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:48.195994 systemd[1]: sshd@110-188.245.230.74:22-147.75.109.163:45344.service: Deactivated successfully. Sep 13 00:15:48.199426 systemd[1]: session-102.scope: Deactivated successfully. Sep 13 00:15:48.200804 systemd-logind[1459]: Session 102 logged out. Waiting for processes to exit. Sep 13 00:15:48.201970 systemd-logind[1459]: Removed session 102. Sep 13 00:15:53.366190 systemd[1]: Started sshd@111-188.245.230.74:22-147.75.109.163:50124.service - OpenSSH per-connection server daemon (147.75.109.163:50124). Sep 13 00:15:54.343809 sshd[9797]: Accepted publickey for core from 147.75.109.163 port 50124 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:15:54.345756 sshd[9797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:54.351285 systemd-logind[1459]: New session 103 of user core. Sep 13 00:15:54.358483 systemd[1]: Started session-103.scope - Session 103 of User core. Sep 13 00:15:55.144773 sshd[9797]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:55.149928 systemd[1]: sshd@111-188.245.230.74:22-147.75.109.163:50124.service: Deactivated successfully. Sep 13 00:15:55.152852 systemd[1]: session-103.scope: Deactivated successfully. Sep 13 00:15:55.154809 systemd-logind[1459]: Session 103 logged out. Waiting for processes to exit. Sep 13 00:15:55.155804 systemd-logind[1459]: Removed session 103. Sep 13 00:16:00.314468 systemd[1]: Started sshd@112-188.245.230.74:22-147.75.109.163:52108.service - OpenSSH per-connection server daemon (147.75.109.163:52108). Sep 13 00:16:01.303470 sshd[9809]: Accepted publickey for core from 147.75.109.163 port 52108 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:01.306007 sshd[9809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:01.310464 systemd-logind[1459]: New session 104 of user core. Sep 13 00:16:01.316152 systemd[1]: Started session-104.scope - Session 104 of User core. Sep 13 00:16:01.486849 systemd[1]: run-containerd-runc-k8s.io-83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0-runc.XLAQ7O.mount: Deactivated successfully. Sep 13 00:16:02.049461 sshd[9809]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:02.055145 systemd[1]: sshd@112-188.245.230.74:22-147.75.109.163:52108.service: Deactivated successfully. Sep 13 00:16:02.061250 systemd[1]: session-104.scope: Deactivated successfully. Sep 13 00:16:02.062206 systemd-logind[1459]: Session 104 logged out. Waiting for processes to exit. Sep 13 00:16:02.064585 systemd-logind[1459]: Removed session 104. Sep 13 00:16:07.236477 systemd[1]: Started sshd@113-188.245.230.74:22-147.75.109.163:52120.service - OpenSSH per-connection server daemon (147.75.109.163:52120). 
Sep 13 00:16:08.225454 sshd[9843]: Accepted publickey for core from 147.75.109.163 port 52120 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:08.227392 sshd[9843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:08.232828 systemd-logind[1459]: New session 105 of user core. Sep 13 00:16:08.240207 systemd[1]: Started session-105.scope - Session 105 of User core. Sep 13 00:16:08.986942 sshd[9843]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:08.991611 systemd[1]: sshd@113-188.245.230.74:22-147.75.109.163:52120.service: Deactivated successfully. Sep 13 00:16:08.996886 systemd[1]: session-105.scope: Deactivated successfully. Sep 13 00:16:08.999218 systemd-logind[1459]: Session 105 logged out. Waiting for processes to exit. Sep 13 00:16:09.001354 systemd-logind[1459]: Removed session 105. Sep 13 00:16:14.166372 systemd[1]: Started sshd@114-188.245.230.74:22-147.75.109.163:49096.service - OpenSSH per-connection server daemon (147.75.109.163:49096). Sep 13 00:16:15.146768 sshd[9900]: Accepted publickey for core from 147.75.109.163 port 49096 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:15.148939 sshd[9900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:15.155290 systemd-logind[1459]: New session 106 of user core. Sep 13 00:16:15.160150 systemd[1]: Started session-106.scope - Session 106 of User core. Sep 13 00:16:15.903181 sshd[9900]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:15.908768 systemd[1]: sshd@114-188.245.230.74:22-147.75.109.163:49096.service: Deactivated successfully. Sep 13 00:16:15.911969 systemd[1]: session-106.scope: Deactivated successfully. Sep 13 00:16:15.913790 systemd-logind[1459]: Session 106 logged out. Waiting for processes to exit. Sep 13 00:16:15.915533 systemd-logind[1459]: Removed session 106. Sep 13 00:16:21.077001 systemd[1]: Started sshd@115-188.245.230.74:22-147.75.109.163:41326.service - OpenSSH per-connection server daemon (147.75.109.163:41326). Sep 13 00:16:22.052228 sshd[9968]: Accepted publickey for core from 147.75.109.163 port 41326 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:22.054350 sshd[9968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:22.060628 systemd-logind[1459]: New session 107 of user core. Sep 13 00:16:22.065113 systemd[1]: Started session-107.scope - Session 107 of User core. Sep 13 00:16:22.797759 sshd[9968]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:22.803873 systemd[1]: sshd@115-188.245.230.74:22-147.75.109.163:41326.service: Deactivated successfully. Sep 13 00:16:22.805926 systemd[1]: session-107.scope: Deactivated successfully. Sep 13 00:16:22.807414 systemd-logind[1459]: Session 107 logged out. Waiting for processes to exit. Sep 13 00:16:22.808452 systemd-logind[1459]: Removed session 107. Sep 13 00:16:27.973404 systemd[1]: Started sshd@116-188.245.230.74:22-147.75.109.163:41336.service - OpenSSH per-connection server daemon (147.75.109.163:41336). Sep 13 00:16:28.942763 sshd[9983]: Accepted publickey for core from 147.75.109.163 port 41336 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:28.945974 sshd[9983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:28.953036 systemd-logind[1459]: New session 108 of user core. 
Sep 13 00:16:28.959159 systemd[1]: Started session-108.scope - Session 108 of User core. Sep 13 00:16:29.688999 sshd[9983]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:29.694268 systemd[1]: sshd@116-188.245.230.74:22-147.75.109.163:41336.service: Deactivated successfully. Sep 13 00:16:29.696764 systemd[1]: session-108.scope: Deactivated successfully. Sep 13 00:16:29.699261 systemd-logind[1459]: Session 108 logged out. Waiting for processes to exit. Sep 13 00:16:29.700242 systemd-logind[1459]: Removed session 108. Sep 13 00:16:31.701389 systemd[1]: Started sshd@117-188.245.230.74:22-195.62.49.203:51398.service - OpenSSH per-connection server daemon (195.62.49.203:51398). Sep 13 00:16:31.885108 sshd[10016]: Received disconnect from 195.62.49.203 port 51398:11: Bye Bye [preauth] Sep 13 00:16:31.885108 sshd[10016]: Disconnected from authenticating user root 195.62.49.203 port 51398 [preauth] Sep 13 00:16:31.889012 systemd[1]: sshd@117-188.245.230.74:22-195.62.49.203:51398.service: Deactivated successfully. Sep 13 00:16:34.863351 systemd[1]: Started sshd@118-188.245.230.74:22-147.75.109.163:38664.service - OpenSSH per-connection server daemon (147.75.109.163:38664). Sep 13 00:16:35.832448 sshd[10021]: Accepted publickey for core from 147.75.109.163 port 38664 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:35.834644 sshd[10021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:35.839824 systemd-logind[1459]: New session 109 of user core. Sep 13 00:16:35.844098 systemd[1]: Started session-109.scope - Session 109 of User core. Sep 13 00:16:36.584452 sshd[10021]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:36.590369 systemd[1]: sshd@118-188.245.230.74:22-147.75.109.163:38664.service: Deactivated successfully. Sep 13 00:16:36.593110 systemd[1]: session-109.scope: Deactivated successfully. Sep 13 00:16:36.594245 systemd-logind[1459]: Session 109 logged out. Waiting for processes to exit. Sep 13 00:16:36.595307 systemd-logind[1459]: Removed session 109. Sep 13 00:16:41.768418 systemd[1]: Started sshd@119-188.245.230.74:22-147.75.109.163:35500.service - OpenSSH per-connection server daemon (147.75.109.163:35500). Sep 13 00:16:42.767881 sshd[10034]: Accepted publickey for core from 147.75.109.163 port 35500 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:42.770299 sshd[10034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:42.776667 systemd-logind[1459]: New session 110 of user core. Sep 13 00:16:42.782268 systemd[1]: Started session-110.scope - Session 110 of User core. Sep 13 00:16:43.524360 sshd[10034]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:43.529491 systemd[1]: sshd@119-188.245.230.74:22-147.75.109.163:35500.service: Deactivated successfully. Sep 13 00:16:43.529694 systemd-logind[1459]: Session 110 logged out. Waiting for processes to exit. Sep 13 00:16:43.532361 systemd[1]: session-110.scope: Deactivated successfully. Sep 13 00:16:43.534171 systemd-logind[1459]: Removed session 110. Sep 13 00:16:48.701228 systemd[1]: Started sshd@120-188.245.230.74:22-147.75.109.163:35514.service - OpenSSH per-connection server daemon (147.75.109.163:35514). 
Sep 13 00:16:49.678640 sshd[10087]: Accepted publickey for core from 147.75.109.163 port 35514 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:49.681250 sshd[10087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:49.688875 systemd-logind[1459]: New session 111 of user core. Sep 13 00:16:49.695260 systemd[1]: Started session-111.scope - Session 111 of User core. Sep 13 00:16:50.429233 sshd[10087]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:50.434987 systemd[1]: sshd@120-188.245.230.74:22-147.75.109.163:35514.service: Deactivated successfully. Sep 13 00:16:50.438765 systemd[1]: session-111.scope: Deactivated successfully. Sep 13 00:16:50.442425 systemd-logind[1459]: Session 111 logged out. Waiting for processes to exit. Sep 13 00:16:50.443502 systemd-logind[1459]: Removed session 111. Sep 13 00:16:55.614590 systemd[1]: Started sshd@121-188.245.230.74:22-147.75.109.163:34452.service - OpenSSH per-connection server daemon (147.75.109.163:34452). Sep 13 00:16:55.620180 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 13 00:16:55.655389 systemd-tmpfiles[10103]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:16:55.656197 systemd-tmpfiles[10103]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:16:55.659053 systemd-tmpfiles[10103]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:16:55.659300 systemd-tmpfiles[10103]: ACLs are not supported, ignoring. Sep 13 00:16:55.659353 systemd-tmpfiles[10103]: ACLs are not supported, ignoring. Sep 13 00:16:55.664014 systemd-tmpfiles[10103]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:16:55.664027 systemd-tmpfiles[10103]: Skipping /boot Sep 13 00:16:55.676367 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 13 00:16:55.677880 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 13 00:16:56.666933 sshd[10102]: Accepted publickey for core from 147.75.109.163 port 34452 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:16:56.668775 sshd[10102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:16:56.675469 systemd-logind[1459]: New session 112 of user core. Sep 13 00:16:56.682257 systemd[1]: Started session-112.scope - Session 112 of User core. Sep 13 00:16:57.470693 sshd[10102]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:57.475522 systemd[1]: sshd@121-188.245.230.74:22-147.75.109.163:34452.service: Deactivated successfully. Sep 13 00:16:57.478624 systemd[1]: session-112.scope: Deactivated successfully. Sep 13 00:16:57.479946 systemd-logind[1459]: Session 112 logged out. Waiting for processes to exit. Sep 13 00:16:57.481254 systemd-logind[1459]: Removed session 112. Sep 13 00:17:02.666440 systemd[1]: Started sshd@122-188.245.230.74:22-147.75.109.163:34478.service - OpenSSH per-connection server daemon (147.75.109.163:34478). Sep 13 00:17:03.669689 sshd[10140]: Accepted publickey for core from 147.75.109.163 port 34478 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:03.672093 sshd[10140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:03.677765 systemd-logind[1459]: New session 113 of user core. 
Sep 13 00:17:03.684255 systemd[1]: Started session-113.scope - Session 113 of User core. Sep 13 00:17:04.438428 sshd[10140]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:04.445360 systemd[1]: sshd@122-188.245.230.74:22-147.75.109.163:34478.service: Deactivated successfully. Sep 13 00:17:04.447634 systemd[1]: session-113.scope: Deactivated successfully. Sep 13 00:17:04.448293 systemd-logind[1459]: Session 113 logged out. Waiting for processes to exit. Sep 13 00:17:04.449136 systemd-logind[1459]: Removed session 113. Sep 13 00:17:09.616330 systemd[1]: Started sshd@123-188.245.230.74:22-147.75.109.163:34494.service - OpenSSH per-connection server daemon (147.75.109.163:34494). Sep 13 00:17:10.606678 sshd[10179]: Accepted publickey for core from 147.75.109.163 port 34494 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:10.609049 sshd[10179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:10.614098 systemd-logind[1459]: New session 114 of user core. Sep 13 00:17:10.621131 systemd[1]: Started session-114.scope - Session 114 of User core. Sep 13 00:17:11.367276 sshd[10179]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:11.372545 systemd[1]: sshd@123-188.245.230.74:22-147.75.109.163:34494.service: Deactivated successfully. Sep 13 00:17:11.376779 systemd[1]: session-114.scope: Deactivated successfully. Sep 13 00:17:11.377854 systemd-logind[1459]: Session 114 logged out. Waiting for processes to exit. Sep 13 00:17:11.379248 systemd-logind[1459]: Removed session 114. Sep 13 00:17:16.545938 systemd[1]: Started sshd@124-188.245.230.74:22-147.75.109.163:59024.service - OpenSSH per-connection server daemon (147.75.109.163:59024). Sep 13 00:17:17.527254 sshd[10193]: Accepted publickey for core from 147.75.109.163 port 59024 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:17.529454 sshd[10193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:17.535024 systemd-logind[1459]: New session 115 of user core. Sep 13 00:17:17.541201 systemd[1]: Started session-115.scope - Session 115 of User core. Sep 13 00:17:18.292509 sshd[10193]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:18.297269 systemd[1]: sshd@124-188.245.230.74:22-147.75.109.163:59024.service: Deactivated successfully. Sep 13 00:17:18.299546 systemd[1]: session-115.scope: Deactivated successfully. Sep 13 00:17:18.303815 systemd-logind[1459]: Session 115 logged out. Waiting for processes to exit. Sep 13 00:17:18.306889 systemd-logind[1459]: Removed session 115. Sep 13 00:17:23.468004 systemd[1]: Started sshd@125-188.245.230.74:22-147.75.109.163:52904.service - OpenSSH per-connection server daemon (147.75.109.163:52904). Sep 13 00:17:24.441478 sshd[10267]: Accepted publickey for core from 147.75.109.163 port 52904 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:24.445188 sshd[10267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:24.451778 systemd-logind[1459]: New session 116 of user core. Sep 13 00:17:24.454525 systemd[1]: Started session-116.scope - Session 116 of User core. Sep 13 00:17:25.193691 sshd[10267]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:25.198384 systemd[1]: sshd@125-188.245.230.74:22-147.75.109.163:52904.service: Deactivated successfully. Sep 13 00:17:25.201174 systemd[1]: session-116.scope: Deactivated successfully. 
Sep 13 00:17:25.202872 systemd-logind[1459]: Session 116 logged out. Waiting for processes to exit. Sep 13 00:17:25.204931 systemd-logind[1459]: Removed session 116. Sep 13 00:17:30.366204 systemd[1]: Started sshd@126-188.245.230.74:22-147.75.109.163:36898.service - OpenSSH per-connection server daemon (147.75.109.163:36898). Sep 13 00:17:31.346666 sshd[10280]: Accepted publickey for core from 147.75.109.163 port 36898 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:31.348694 sshd[10280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:31.354169 systemd-logind[1459]: New session 117 of user core. Sep 13 00:17:31.361222 systemd[1]: Started session-117.scope - Session 117 of User core. Sep 13 00:17:31.480224 systemd[1]: run-containerd-runc-k8s.io-83100cd5d1a4820f14010c1743d8f28db12d3835bdd44fbf9212b51181c879a0-runc.1wUcew.mount: Deactivated successfully. Sep 13 00:17:32.093653 sshd[10280]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:32.098222 systemd[1]: sshd@126-188.245.230.74:22-147.75.109.163:36898.service: Deactivated successfully. Sep 13 00:17:32.100630 systemd[1]: session-117.scope: Deactivated successfully. Sep 13 00:17:32.101538 systemd-logind[1459]: Session 117 logged out. Waiting for processes to exit. Sep 13 00:17:32.103130 systemd-logind[1459]: Removed session 117. Sep 13 00:17:32.172208 systemd[1]: Started sshd@127-188.245.230.74:22-195.62.49.203:57302.service - OpenSSH per-connection server daemon (195.62.49.203:57302). Sep 13 00:17:32.358866 sshd[10315]: Received disconnect from 195.62.49.203 port 57302:11: Bye Bye [preauth] Sep 13 00:17:32.358866 sshd[10315]: Disconnected from authenticating user root 195.62.49.203 port 57302 [preauth] Sep 13 00:17:32.362381 systemd[1]: sshd@127-188.245.230.74:22-195.62.49.203:57302.service: Deactivated successfully. Sep 13 00:17:37.267997 systemd[1]: Started sshd@128-188.245.230.74:22-147.75.109.163:36902.service - OpenSSH per-connection server daemon (147.75.109.163:36902). Sep 13 00:17:38.269238 sshd[10320]: Accepted publickey for core from 147.75.109.163 port 36902 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:38.272109 sshd[10320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:38.277591 systemd-logind[1459]: New session 118 of user core. Sep 13 00:17:38.283119 systemd[1]: Started session-118.scope - Session 118 of User core. Sep 13 00:17:39.027840 sshd[10320]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:39.033749 systemd[1]: sshd@128-188.245.230.74:22-147.75.109.163:36902.service: Deactivated successfully. Sep 13 00:17:39.037064 systemd[1]: session-118.scope: Deactivated successfully. Sep 13 00:17:39.038694 systemd-logind[1459]: Session 118 logged out. Waiting for processes to exit. Sep 13 00:17:39.039967 systemd-logind[1459]: Removed session 118. Sep 13 00:17:44.213405 systemd[1]: Started sshd@129-188.245.230.74:22-147.75.109.163:57968.service - OpenSSH per-connection server daemon (147.75.109.163:57968). Sep 13 00:17:45.204471 sshd[10354]: Accepted publickey for core from 147.75.109.163 port 57968 ssh2: RSA SHA256:bk/7TLrptUsRlsRU8kT0ooDVsm6tbA2jrK7QjRZsxaM Sep 13 00:17:45.206334 sshd[10354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:45.213467 systemd-logind[1459]: New session 119 of user core. Sep 13 00:17:45.219159 systemd[1]: Started session-119.scope - Session 119 of User core. 
Sep 13 00:17:45.962612 sshd[10354]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:45.968210 systemd[1]: sshd@129-188.245.230.74:22-147.75.109.163:57968.service: Deactivated successfully. Sep 13 00:17:45.972117 systemd[1]: session-119.scope: Deactivated successfully. Sep 13 00:17:45.972985 systemd-logind[1459]: Session 119 logged out. Waiting for processes to exit. Sep 13 00:17:45.974215 systemd-logind[1459]: Removed session 119. Sep 13 00:18:00.412039 systemd[1]: cri-containerd-2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701.scope: Deactivated successfully. Sep 13 00:18:00.413425 systemd[1]: cri-containerd-2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701.scope: Consumed 43.482s CPU time. Sep 13 00:18:00.435481 containerd[1482]: time="2025-09-13T00:18:00.435388900Z" level=info msg="shim disconnected" id=2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701 namespace=k8s.io Sep 13 00:18:00.435481 containerd[1482]: time="2025-09-13T00:18:00.435456741Z" level=warning msg="cleaning up after shim disconnected" id=2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701 namespace=k8s.io Sep 13 00:18:00.435481 containerd[1482]: time="2025-09-13T00:18:00.435467621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:18:00.440992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701-rootfs.mount: Deactivated successfully. Sep 13 00:18:00.884632 kubelet[2551]: E0913 00:18:00.884546 2551 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34470->10.0.0.2:2379: read: connection timed out" Sep 13 00:18:00.890233 systemd[1]: cri-containerd-7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e.scope: Deactivated successfully. Sep 13 00:18:00.890666 systemd[1]: cri-containerd-7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e.scope: Consumed 3.904s CPU time, 15.6M memory peak, 0B memory swap peak. Sep 13 00:18:00.913494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e-rootfs.mount: Deactivated successfully. Sep 13 00:18:00.916289 containerd[1482]: time="2025-09-13T00:18:00.916218033Z" level=info msg="shim disconnected" id=7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e namespace=k8s.io Sep 13 00:18:00.916289 containerd[1482]: time="2025-09-13T00:18:00.916284394Z" level=warning msg="cleaning up after shim disconnected" id=7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e namespace=k8s.io Sep 13 00:18:00.916289 containerd[1482]: time="2025-09-13T00:18:00.916293714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:18:00.966633 kubelet[2551]: I0913 00:18:00.965131 2551 scope.go:117] "RemoveContainer" containerID="7e1284c7134c29e7e5dc7847c829a93a38b3d31d82c720064714adafd783292e" Sep 13 00:18:00.973219 systemd[1]: cri-containerd-03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081.scope: Deactivated successfully. Sep 13 00:18:00.974625 systemd[1]: cri-containerd-03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081.scope: Consumed 11.243s CPU time, 20.2M memory peak, 0B memory swap peak. 
Sep 13 00:18:00.978928 containerd[1482]: time="2025-09-13T00:18:00.978748425Z" level=info msg="CreateContainer within sandbox \"5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 13 00:18:00.982594 kubelet[2551]: I0913 00:18:00.982448 2551 scope.go:117] "RemoveContainer" containerID="2709f4e0afd6e6f9071eb46f84220d0ddea3ac73fa533a46d1bd0237e42e4701" Sep 13 00:18:01.002769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220415510.mount: Deactivated successfully. Sep 13 00:18:01.003926 containerd[1482]: time="2025-09-13T00:18:01.003284303Z" level=info msg="CreateContainer within sandbox \"b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 13 00:18:01.009551 containerd[1482]: time="2025-09-13T00:18:01.009387352Z" level=info msg="CreateContainer within sandbox \"5ab2dd2378d293f378d9581591c81cf6df51b435b89cfa2887c9a619ad311a67\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"39f739e26e9c5ddf36da9145d4ba4d25e4e99eae7d44aa9a0c019ad2c26514ad\"" Sep 13 00:18:01.011556 containerd[1482]: time="2025-09-13T00:18:01.010802452Z" level=info msg="StartContainer for \"39f739e26e9c5ddf36da9145d4ba4d25e4e99eae7d44aa9a0c019ad2c26514ad\"" Sep 13 00:18:01.020653 kubelet[2551]: E0913 00:18:01.016386 2551 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34288->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-d78c7abf5e.1864af75a6b93826 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-d78c7abf5e,UID:56023b2e468fed450a14df8478d6a818,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-d78c7abf5e,},FirstTimestamp:2025-09-13 00:17:54.424526886 +0000 UTC m=+908.431017305,LastTimestamp:2025-09-13 00:17:54.424526886 +0000 UTC m=+908.431017305,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-d78c7abf5e,}" Sep 13 00:18:01.033126 containerd[1482]: time="2025-09-13T00:18:01.032532689Z" level=info msg="shim disconnected" id=03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081 namespace=k8s.io Sep 13 00:18:01.033695 containerd[1482]: time="2025-09-13T00:18:01.033469023Z" level=warning msg="cleaning up after shim disconnected" id=03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081 namespace=k8s.io Sep 13 00:18:01.033695 containerd[1482]: time="2025-09-13T00:18:01.033553464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:18:01.036944 containerd[1482]: time="2025-09-13T00:18:01.035848098Z" level=info msg="CreateContainer within sandbox \"b0b06f04c1bfd023914e4dd6beaceb043371f01c4e1bda0fa7c227afc9f4c9b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"42a95a2b02a78fb5bc944c765d203c510827c3664371df1a7fbb1c666d01205a\"" Sep 13 00:18:01.036944 containerd[1482]: time="2025-09-13T00:18:01.036357785Z" level=info msg="StartContainer for \"42a95a2b02a78fb5bc944c765d203c510827c3664371df1a7fbb1c666d01205a\""
Sep 13 00:18:01.063665 systemd[1]: Started cri-containerd-39f739e26e9c5ddf36da9145d4ba4d25e4e99eae7d44aa9a0c019ad2c26514ad.scope - libcontainer container 39f739e26e9c5ddf36da9145d4ba4d25e4e99eae7d44aa9a0c019ad2c26514ad. Sep 13 00:18:01.081203 systemd[1]: Started cri-containerd-42a95a2b02a78fb5bc944c765d203c510827c3664371df1a7fbb1c666d01205a.scope - libcontainer container 42a95a2b02a78fb5bc944c765d203c510827c3664371df1a7fbb1c666d01205a. Sep 13 00:18:01.120574 containerd[1482]: time="2025-09-13T00:18:01.120526653Z" level=info msg="StartContainer for \"42a95a2b02a78fb5bc944c765d203c510827c3664371df1a7fbb1c666d01205a\" returns successfully" Sep 13 00:18:01.125095 containerd[1482]: time="2025-09-13T00:18:01.125028398Z" level=info msg="StartContainer for \"39f739e26e9c5ddf36da9145d4ba4d25e4e99eae7d44aa9a0c019ad2c26514ad\" returns successfully" Sep 13 00:18:01.441243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466542380.mount: Deactivated successfully. Sep 13 00:18:01.441412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081-rootfs.mount: Deactivated successfully. Sep 13 00:18:01.987592 kubelet[2551]: I0913 00:18:01.987559 2551 scope.go:117] "RemoveContainer" containerID="03c9806d56d62a935e35384698f1dea9e4c3af0fbfd4be291b3cc4daf5850081" Sep 13 00:18:01.991817 containerd[1482]: time="2025-09-13T00:18:01.990566101Z" level=info msg="CreateContainer within sandbox \"6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 13 00:18:02.011668 containerd[1482]: time="2025-09-13T00:18:02.010538472Z" level=info msg="CreateContainer within sandbox \"6642337d1303a53e3a0e7d6286b91c2d62d4c93d9ecb7baaeb7dd523a154721c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"db9bbeee32b29dc99549e86ab30e89009155a00db713b21863ce451b1115f586\"" Sep 13 00:18:02.011392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311055563.mount: Deactivated successfully. Sep 13 00:18:02.011880 containerd[1482]: time="2025-09-13T00:18:02.011755170Z" level=info msg="StartContainer for \"db9bbeee32b29dc99549e86ab30e89009155a00db713b21863ce451b1115f586\"" Sep 13 00:18:02.040156 systemd[1]: Started cri-containerd-db9bbeee32b29dc99549e86ab30e89009155a00db713b21863ce451b1115f586.scope - libcontainer container db9bbeee32b29dc99549e86ab30e89009155a00db713b21863ce451b1115f586. Sep 13 00:18:02.074084 containerd[1482]: time="2025-09-13T00:18:02.074036798Z" level=info msg="StartContainer for \"db9bbeee32b29dc99549e86ab30e89009155a00db713b21863ce451b1115f586\" returns successfully"