Aug 13 00:15:21.418641 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:15:21.418697 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:15:21.418725 kernel: KASLR enabled
Aug 13 00:15:21.418742 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Aug 13 00:15:21.418759 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Aug 13 00:15:21.418775 kernel: random: crng init done
Aug 13 00:15:21.418795 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:15:21.418813 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Aug 13 00:15:21.418831 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:15:21.418851 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418869 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418887 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418905 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418922 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418944 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418967 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.418986 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.419005 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:15:21.419099 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Aug 13 00:15:21.419141 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Aug 13 00:15:21.419165 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:15:21.419184 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Aug 13 00:15:21.421317 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Aug 13 00:15:21.421351 kernel: Zone ranges:
Aug 13 00:15:21.421371 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 13 00:15:21.421401 kernel: DMA32 empty
Aug 13 00:15:21.421420 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Aug 13 00:15:21.421439 kernel: Movable zone start for each node
Aug 13 00:15:21.421457 kernel: Early memory node ranges
Aug 13 00:15:21.421476 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Aug 13 00:15:21.421495 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Aug 13 00:15:21.421513 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Aug 13 00:15:21.421532 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Aug 13 00:15:21.421550 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Aug 13 00:15:21.421569 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Aug 13 00:15:21.421587 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Aug 13 00:15:21.421607 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Aug 13 00:15:21.421629 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Aug 13 00:15:21.421648 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:15:21.421667 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:15:21.421693 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:15:21.421713 kernel: psci: Trusted OS migration not required
Aug 13 00:15:21.421733 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:15:21.421762 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:15:21.421783 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:15:21.421803 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:15:21.421823 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:15:21.421843 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:15:21.421863 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:15:21.421882 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:15:21.421902 kernel: CPU features: detected: Spectre-v4
Aug 13 00:15:21.421922 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:15:21.421941 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:15:21.421965 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:15:21.421985 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:15:21.422004 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:15:21.422047 kernel: alternatives: applying boot alternatives
Aug 13 00:15:21.422077 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:15:21.422099 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:15:21.422119 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:15:21.422139 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:15:21.422159 kernel: Fallback order for Node 0: 0
Aug 13 00:15:21.422180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Aug 13 00:15:21.422217 kernel: Policy zone: Normal
Aug 13 00:15:21.422248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:15:21.422268 kernel: software IO TLB: area num 2.
Aug 13 00:15:21.422288 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Aug 13 00:15:21.422310 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Aug 13 00:15:21.422330 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:15:21.422350 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:15:21.422371 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:15:21.422391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:15:21.422411 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:15:21.422432 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:15:21.422452 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:15:21.422476 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:15:21.422497 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:15:21.422517 kernel: GICv3: 256 SPIs implemented
Aug 13 00:15:21.422537 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:15:21.422557 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:15:21.422576 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:15:21.422596 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:15:21.422616 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:15:21.422636 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:15:21.422656 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:15:21.422676 kernel: GICv3: using LPI property table @0x00000001000e0000
Aug 13 00:15:21.422696 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Aug 13 00:15:21.422720 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:15:21.422740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:15:21.422760 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:15:21.422889 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:15:21.422915 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:15:21.422936 kernel: Console: colour dummy device 80x25
Aug 13 00:15:21.422957 kernel: ACPI: Core revision 20230628
Aug 13 00:15:21.422978 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:15:21.422998 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:15:21.423019 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:15:21.423064 kernel: landlock: Up and running.
Aug 13 00:15:21.423085 kernel: SELinux: Initializing.
Aug 13 00:15:21.423106 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:15:21.423127 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:15:21.423149 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:15:21.423173 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:15:21.423196 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:15:21.425293 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:15:21.425316 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:15:21.425347 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:15:21.425368 kernel: Remapping and enabling EFI services.
Aug 13 00:15:21.425390 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:15:21.425411 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:15:21.425432 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:15:21.425453 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Aug 13 00:15:21.425473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:15:21.425493 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:15:21.425513 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:15:21.425534 kernel: SMP: Total of 2 processors activated.
Aug 13 00:15:21.425558 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:15:21.425579 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:15:21.425613 kernel: CPU features: detected: Common not Private translations
Aug 13 00:15:21.425638 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:15:21.425659 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:15:21.425681 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:15:21.425702 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:15:21.425723 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:15:21.425746 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:15:21.425772 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:15:21.425795 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:15:21.425818 kernel: alternatives: applying system-wide alternatives
Aug 13 00:15:21.425840 kernel: devtmpfs: initialized
Aug 13 00:15:21.425861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:15:21.425883 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:15:21.425904 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:15:21.425928 kernel: SMBIOS 3.0.0 present.
Aug 13 00:15:21.425950 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Aug 13 00:15:21.425972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:15:21.425994 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:15:21.426015 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:15:21.426062 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:15:21.426108 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:15:21.426137 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Aug 13 00:15:21.426159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:15:21.426187 kernel: cpuidle: using governor menu
Aug 13 00:15:21.426245 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:15:21.426285 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:15:21.426311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:15:21.426332 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:15:21.426362 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:15:21.426384 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:15:21.426406 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:15:21.426427 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:15:21.426456 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:15:21.426478 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:15:21.426499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:15:21.426520 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:15:21.426542 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:15:21.426563 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:15:21.426584 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:15:21.426605 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:15:21.426627 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:15:21.426651 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:15:21.426673 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:15:21.426694 kernel: ACPI: Interpreter enabled
Aug 13 00:15:21.426715 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:15:21.426737 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:15:21.426758 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:15:21.426780 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:15:21.426801 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:15:21.429801 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:15:21.430229 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:15:21.430447 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:15:21.430643 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:15:21.430835 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:15:21.430864 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:15:21.430887 kernel: PCI host bridge to bus 0000:00
Aug 13 00:15:21.431112 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:15:21.433462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:15:21.433669 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:15:21.433846 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:15:21.434116 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:15:21.436195 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Aug 13 00:15:21.436467 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Aug 13 00:15:21.436684 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Aug 13 00:15:21.436890 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.437159 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Aug 13 00:15:21.437457 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.437655 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Aug 13 00:15:21.437885 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.438118 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Aug 13 00:15:21.438357 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.438556 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Aug 13 00:15:21.438761 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.438953 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Aug 13 00:15:21.441331 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.441608 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Aug 13 00:15:21.441830 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.442053 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Aug 13 00:15:21.442341 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.442564 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Aug 13 00:15:21.442774 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Aug 13 00:15:21.442978 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Aug 13 00:15:21.445294 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Aug 13 00:15:21.445548 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Aug 13 00:15:21.445775 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Aug 13 00:15:21.445980 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Aug 13 00:15:21.446259 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:15:21.446488 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Aug 13 00:15:21.446716 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Aug 13 00:15:21.446919 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Aug 13 00:15:21.448463 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Aug 13 00:15:21.448712 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Aug 13 00:15:21.448913 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Aug 13 00:15:21.449264 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Aug 13 00:15:21.449486 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Aug 13 00:15:21.449725 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Aug 13 00:15:21.449928 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Aug 13 00:15:21.450166 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Aug 13 00:15:21.450683 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Aug 13 00:15:21.451000 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Aug 13 00:15:21.451293 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Aug 13 00:15:21.451525 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Aug 13 00:15:21.451724 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Aug 13 00:15:21.451919 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Aug 13 00:15:21.452257 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Aug 13 00:15:21.452466 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Aug 13 00:15:21.452655 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Aug 13 00:15:21.452860 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Aug 13 00:15:21.453081 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Aug 13 00:15:21.453318 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Aug 13 00:15:21.453519 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Aug 13 00:15:21.453714 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Aug 13 00:15:21.453903 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Aug 13 00:15:21.454128 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Aug 13 00:15:21.454626 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Aug 13 00:15:21.454846 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Aug 13 00:15:21.455125 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Aug 13 00:15:21.455384 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Aug 13 00:15:21.455582 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Aug 13 00:15:21.455778 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 13 00:15:21.455970 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Aug 13 00:15:21.456228 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Aug 13 00:15:21.456455 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 13 00:15:21.456649 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Aug 13 00:15:21.456839 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Aug 13 00:15:21.457071 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 13 00:15:21.459900 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Aug 13 00:15:21.460156 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Aug 13 00:15:21.460388 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 13 00:15:21.460581 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Aug 13 00:15:21.460782 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Aug 13 00:15:21.460975 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Aug 13 00:15:21.462957 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:15:21.463362 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Aug 13 00:15:21.463569 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:15:21.463766 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Aug 13 00:15:21.463969 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:15:21.464256 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Aug 13 00:15:21.465556 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:15:21.465756 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Aug 13 00:15:21.465945 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:15:21.466188 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Aug 13 00:15:21.469517 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:15:21.469763 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Aug 13 00:15:21.470054 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:15:21.470300 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Aug 13 00:15:21.470495 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:15:21.470689 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Aug 13 00:15:21.470876 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:15:21.471090 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Aug 13 00:15:21.471325 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Aug 13 00:15:21.471521 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Aug 13 00:15:21.471711 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Aug 13 00:15:21.471902 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Aug 13 00:15:21.472179 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Aug 13 00:15:21.474519 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Aug 13 00:15:21.474726 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Aug 13 00:15:21.474924 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Aug 13 00:15:21.475185 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Aug 13 00:15:21.477826 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Aug 13 00:15:21.478016 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Aug 13 00:15:21.478370 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Aug 13 00:15:21.478574 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Aug 13 00:15:21.478768 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Aug 13 00:15:21.478957 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Aug 13 00:15:21.479183 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Aug 13 00:15:21.479426 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Aug 13 00:15:21.479623 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Aug 13 00:15:21.479812 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Aug 13 00:15:21.480005 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Aug 13 00:15:21.482344 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Aug 13 00:15:21.482622 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:15:21.482827 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Aug 13 00:15:21.483047 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Aug 13 00:15:21.483388 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Aug 13 00:15:21.483585 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Aug 13 00:15:21.483774 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:15:21.483982 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Aug 13 00:15:21.485401 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Aug 13 00:15:21.485644 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Aug 13 00:15:21.485835 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Aug 13 00:15:21.486043 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:15:21.486319 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Aug 13 00:15:21.486551 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Aug 13 00:15:21.486746 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Aug 13 00:15:21.486935 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Aug 13 00:15:21.488274 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Aug 13 00:15:21.488526 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:15:21.488740 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Aug 13 00:15:21.488934 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Aug 13 00:15:21.489171 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Aug 13 00:15:21.491166 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Aug 13 00:15:21.491428 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:15:21.491645 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Aug 13 00:15:21.491854 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Aug 13 00:15:21.492077 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Aug 13 00:15:21.492348 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Aug 13 00:15:21.492558 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:15:21.492761 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Aug 13 00:15:21.493534 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Aug 13 00:15:21.493752 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Aug 13 00:15:21.493941 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Aug 13 00:15:21.494190 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Aug 13 00:15:21.494425 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:15:21.494637 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Aug 13 00:15:21.494887 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Aug 13 00:15:21.495116 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Aug 13 00:15:21.495348 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Aug 13 00:15:21.495541 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Aug 13 00:15:21.495743 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Aug 13 00:15:21.495947 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:15:21.496249 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Aug 13 00:15:21.496453 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Aug 13 00:15:21.496642 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Aug 13 00:15:21.496830 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:15:21.497042 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Aug 13 00:15:21.497553 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Aug 13 00:15:21.497772 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Aug 13 00:15:21.497968 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:15:21.498281 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:15:21.498469 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:15:21.498639 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:15:21.498834 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Aug 13 00:15:21.499010 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Aug 13 00:15:21.499278 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Aug 13 00:15:21.499492 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Aug 13 00:15:21.499669 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Aug 13 00:15:21.499842 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Aug 13 00:15:21.500083 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Aug 13 00:15:21.500331 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Aug 13 00:15:21.500513 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Aug 13 00:15:21.500720 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Aug 13 00:15:21.500896 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Aug 13 00:15:21.501154 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Aug 13 00:15:21.503529 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Aug 13 00:15:21.503726 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Aug 13 00:15:21.503906 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Aug 13 00:15:21.504144 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Aug 13 00:15:21.505626 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Aug 13 00:15:21.505831 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Aug 13 00:15:21.506096 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Aug 13 00:15:21.506371 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Aug 13 00:15:21.506576 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Aug 13 00:15:21.506819 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Aug 13 00:15:21.507001 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Aug 13 00:15:21.510336 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Aug 13 00:15:21.510596 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Aug 13 00:15:21.510781 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Aug 13 00:15:21.510959 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Aug 13 00:15:21.511000 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:15:21.511045 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:15:21.511074 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:15:21.511098 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:15:21.511121 kernel: iommu: Default domain type: Translated
Aug 13 00:15:21.511144 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:15:21.511167 kernel: efivars: Registered efivars operations
Aug 13 00:15:21.511190 kernel: vgaarb: loaded
Aug 13 00:15:21.511235 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:15:21.511265 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:15:21.511289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:15:21.511311 kernel: pnp: PnP ACPI init
Aug 13 00:15:21.511531 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:15:21.511565 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:15:21.511588 kernel: NET: Registered PF_INET protocol family
Aug 13 00:15:21.511611 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:15:21.511634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:15:21.511663 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:15:21.511686 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:15:21.511709 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:15:21.511732 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:15:21.511755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:15:21.511778 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:15:21.511801 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:15:21.512008 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Aug 13 00:15:21.512101 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:15:21.512132 kernel: kvm [1]: HYP mode not available
Aug 13 00:15:21.512156 kernel: Initialise system trusted keyrings
Aug 13 00:15:21.512178 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:15:21.513367 kernel: Key type asymmetric registered
Aug 13 00:15:21.513408 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:15:21.513431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:15:21.513455 kernel: io scheduler mq-deadline registered
Aug 13 00:15:21.513478 kernel: io scheduler kyber registered
Aug 13 00:15:21.513502 kernel: io scheduler bfq registered
Aug 13 00:15:21.513538 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 13 00:15:21.513813 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Aug 13 00:15:21.514121 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Aug 13 00:15:21.514369 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.514579 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Aug 13 00:15:21.514779 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Aug 13 00:15:21.514986 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.516825 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Aug 13 00:15:21.517137 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Aug 13 00:15:21.518351 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.518601 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Aug 13 00:15:21.518804 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Aug 13 00:15:21.519015 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.519369 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Aug 13 00:15:21.519573 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Aug 13 00:15:21.519764 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.519985 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Aug 13 00:15:21.520258 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Aug 13 00:15:21.520472 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.520672 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Aug 13 00:15:21.520888 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Aug 13 00:15:21.521113 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.522407 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Aug 13 00:15:21.522623 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Aug 13 00:15:21.522829 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.522860 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Aug 13 00:15:21.523083 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Aug 13 00:15:21.523331 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Aug 13 00:15:21.523530 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 00:15:21.523561 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:15:21.523584 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:15:21.523615 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:15:21.523820 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Aug 13 00:15:21.524052 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Aug 13 00:15:21.524085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:15:21.524109 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 13 00:15:21.525964 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Aug 13 00:15:21.526012 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Aug 13 00:15:21.526074 kernel: thunder_xcv, ver 1.0
Aug 13 00:15:21.526098 kernel: thunder_bgx, ver 1.0
Aug 13 00:15:21.526131 kernel: nicpf, ver 1.0
Aug 13 00:15:21.526155 kernel: nicvf, ver 1.0
Aug 13 00:15:21.526429 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:15:21.526621 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:15:20 UTC (1755044120)
Aug 13 00:15:21.526652 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:15:21.526675 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:15:21.526698 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:15:21.526721 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:15:21.526751 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:15:21.526774 kernel: Segment Routing with IPv6
Aug 13 00:15:21.526797 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:15:21.526820 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:15:21.526842 kernel: Key type dns_resolver registered
Aug 13 00:15:21.526865 kernel: registered taskstats version 1
Aug 13 00:15:21.526888 kernel: Loading compiled-in X.509 certificates
Aug 13 00:15:21.526911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:15:21.526934 kernel: Key type .fscrypt registered
Aug 13 00:15:21.527311 kernel: Key type fscrypt-provisioning registered
Aug 13 00:15:21.527341 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:15:21.527364 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:15:21.527387 kernel: ima: No architecture policies found
Aug 13 00:15:21.527410 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:15:21.527432 kernel: clk: Disabling unused clocks
Aug 13 00:15:21.527455 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:15:21.527478 kernel: Run /init as init process
Aug 13 00:15:21.527500 kernel: with arguments:
Aug 13 00:15:21.527533 kernel: /init
Aug 13 00:15:21.527556 kernel: with environment:
Aug 13 00:15:21.527577 kernel: HOME=/
Aug 13 00:15:21.527600 kernel: TERM=linux
Aug 13 00:15:21.527621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:15:21.527651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:15:21.527679 systemd[1]: Detected virtualization kvm.
Aug 13 00:15:21.527704 systemd[1]: Detected architecture arm64.
Aug 13 00:15:21.527731 systemd[1]: Running in initrd.
Aug 13 00:15:21.527754 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:15:21.527781 systemd[1]: Hostname set to .
Aug 13 00:15:21.527806 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:15:21.527830 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:15:21.527855 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:15:21.527879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:15:21.527905 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:15:21.527933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:15:21.527957 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:15:21.527982 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:15:21.528011 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:15:21.528056 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:15:21.528083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:15:21.528113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:15:21.528136 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:15:21.528159 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:15:21.528181 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:15:21.530136 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:15:21.530172 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:15:21.530215 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:15:21.530243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:15:21.530268 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:15:21.530303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:15:21.530329 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:15:21.530353 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:15:21.530377 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:15:21.530417 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:15:21.530448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:15:21.530473 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:15:21.530497 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:15:21.530521 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:15:21.530551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:15:21.530575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:15:21.530599 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:15:21.530624 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:15:21.530707 systemd-journald[235]: Collecting audit messages is disabled.
Aug 13 00:15:21.530773 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:15:21.530817 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:15:21.530849 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:15:21.530880 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:15:21.530905 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:15:21.530930 kernel: Bridge firewalling registered
Aug 13 00:15:21.530953 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:15:21.530979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:15:21.531004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:15:21.531049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:15:21.531077 systemd-journald[235]: Journal started
Aug 13 00:15:21.531136 systemd-journald[235]: Runtime Journal (/run/log/journal/460d07a45d4d4dccac82779cf4478266) is 8.0M, max 76.6M, 68.6M free.
Aug 13 00:15:21.414771 systemd-modules-load[237]: Inserted module 'overlay'
Aug 13 00:15:21.535724 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:15:21.481513 systemd-modules-load[237]: Inserted module 'br_netfilter'
Aug 13 00:15:21.550526 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:15:21.560293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:15:21.569597 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:15:21.579392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:15:21.583385 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:15:21.594496 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:15:21.604539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:15:21.636640 dracut-cmdline[270]: dracut-dracut-053
Aug 13 00:15:21.648094 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:15:21.685826 systemd-resolved[272]: Positive Trust Anchors:
Aug 13 00:15:21.685868 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:15:21.685965 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:15:21.696954 systemd-resolved[272]: Defaulting to hostname 'linux'.
Aug 13 00:15:21.699178 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:15:21.704521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:15:21.834263 kernel: SCSI subsystem initialized
Aug 13 00:15:21.847300 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:15:21.870303 kernel: iscsi: registered transport (tcp)
Aug 13 00:15:21.907675 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:15:21.907783 kernel: QLogic iSCSI HBA Driver
Aug 13 00:15:22.023062 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:15:22.032625 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:15:22.088150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:15:22.088300 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:15:22.090238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:15:22.189279 kernel: raid6: neonx8 gen() 5170 MB/s
Aug 13 00:15:22.206295 kernel: raid6: neonx4 gen() 5112 MB/s
Aug 13 00:15:22.223290 kernel: raid6: neonx2 gen() 4328 MB/s
Aug 13 00:15:22.240269 kernel: raid6: neonx1 gen() 3437 MB/s
Aug 13 00:15:22.257285 kernel: raid6: int64x8 gen() 2277 MB/s
Aug 13 00:15:22.274335 kernel: raid6: int64x4 gen() 2403 MB/s
Aug 13 00:15:22.291297 kernel: raid6: int64x2 gen() 2011 MB/s
Aug 13 00:15:22.309007 kernel: raid6: int64x1 gen() 1649 MB/s
Aug 13 00:15:22.309139 kernel: raid6: using algorithm neonx8 gen() 5170 MB/s
Aug 13 00:15:22.326986 kernel: raid6: .... xor() 3911 MB/s, rmw enabled
Aug 13 00:15:22.327120 kernel: raid6: using neon recovery algorithm
Aug 13 00:15:22.341303 kernel: xor: measuring software checksum speed
Aug 13 00:15:22.341477 kernel: 8regs : 6172 MB/sec
Aug 13 00:15:22.344796 kernel: 32regs : 6277 MB/sec
Aug 13 00:15:22.344954 kernel: arm64_neon : 9027 MB/sec
Aug 13 00:15:22.344987 kernel: xor: using function: arm64_neon (9027 MB/sec)
Aug 13 00:15:22.487350 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:15:22.514747 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:15:22.524641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:15:22.575698 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 00:15:22.586073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:15:22.597517 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:15:22.644154 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Aug 13 00:15:22.708666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:15:22.717621 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:15:22.854418 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:15:22.865634 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:15:22.916505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:15:22.920978 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:15:22.926088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:15:22.932429 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:15:22.944852 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:15:23.002001 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:15:23.103290 kernel: scsi host0: Virtio SCSI HBA
Aug 13 00:15:23.118231 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Aug 13 00:15:23.118370 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 00:15:23.149659 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:15:23.149807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:15:23.157633 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:15:23.159622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:15:23.159757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:15:23.168492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:15:23.180231 kernel: ACPI: bus type USB registered
Aug 13 00:15:23.182861 kernel: usbcore: registered new interface driver usbfs
Aug 13 00:15:23.182930 kernel: usbcore: registered new interface driver hub
Aug 13 00:15:23.187194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:15:23.199729 kernel: usbcore: registered new device driver usb
Aug 13 00:15:23.236454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:15:23.246556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:15:23.259876 kernel: sr 0:0:0:0: Power-on or device reset occurred
Aug 13 00:15:23.271655 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Aug 13 00:15:23.273276 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:15:23.277221 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Aug 13 00:15:23.277580 kernel: sd 0:0:0:1: Power-on or device reset occurred
Aug 13 00:15:23.277852 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Aug 13 00:15:23.278608 kernel: sd 0:0:0:1: [sda] Write Protect is off
Aug 13 00:15:23.282237 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Aug 13 00:15:23.282738 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 00:15:23.303167 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:15:23.303298 kernel: GPT:17805311 != 80003071
Aug 13 00:15:23.304706 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:15:23.306876 kernel: GPT:17805311 != 80003071
Aug 13 00:15:23.306948 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:15:23.308187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:15:23.310243 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Aug 13 00:15:23.321800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:15:23.331894 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Aug 13 00:15:23.332398 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Aug 13 00:15:23.332661 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Aug 13 00:15:23.338306 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Aug 13 00:15:23.338701 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Aug 13 00:15:23.338945 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Aug 13 00:15:23.339815 kernel: hub 1-0:1.0: USB hub found
Aug 13 00:15:23.343548 kernel: hub 1-0:1.0: 4 ports detected
Aug 13 00:15:23.349371 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Aug 13 00:15:23.352506 kernel: hub 2-0:1.0: USB hub found
Aug 13 00:15:23.355251 kernel: hub 2-0:1.0: 4 ports detected
Aug 13 00:15:23.417232 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (510)
Aug 13 00:15:23.423285 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (502)
Aug 13 00:15:23.431237 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 00:15:23.472229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 00:15:23.491399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 00:15:23.504612 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 00:15:23.506622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 00:15:23.517602 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:15:23.542464 disk-uuid[576]: Primary Header is updated.
Aug 13 00:15:23.542464 disk-uuid[576]: Secondary Entries is updated.
Aug 13 00:15:23.542464 disk-uuid[576]: Secondary Header is updated.
Aug 13 00:15:23.557249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:15:23.590729 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Aug 13 00:15:23.791847 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Aug 13 00:15:23.791942 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Aug 13 00:15:23.796369 kernel: usbcore: registered new interface driver usbhid
Aug 13 00:15:23.796464 kernel: usbhid: USB HID core driver
Aug 13 00:15:23.846274 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Aug 13 00:15:23.977278 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Aug 13 00:15:24.032266 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Aug 13 00:15:24.582262 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:15:24.584251 disk-uuid[577]: The operation has completed successfully.
Aug 13 00:15:24.706975 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:15:24.707460 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:15:24.753625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:15:24.780388 sh[594]: Success
Aug 13 00:15:24.809241 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:15:24.918972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:15:24.922770 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:15:24.931467 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:15:24.976485 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:15:24.976578 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:15:24.976611 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:15:24.978307 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:15:24.979772 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:15:24.997326 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:15:25.000705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:15:25.003358 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:15:25.011555 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:15:25.017541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:15:25.039565 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:15:25.039651 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:15:25.040767 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:15:25.047777 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 00:15:25.047870 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:15:25.070538 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:15:25.076246 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:15:25.089300 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:15:25.098661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:15:25.290998 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:15:25.308710 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:15:25.315165 ignition[679]: Ignition 2.19.0
Aug 13 00:15:25.315355 ignition[679]: Stage: fetch-offline
Aug 13 00:15:25.315448 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:15:25.315470 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Aug 13 00:15:25.317224 ignition[679]: parsed url from cmdline: ""
Aug 13 00:15:25.317240 ignition[679]: no config URL provided
Aug 13 00:15:25.317257 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:15:25.317285 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:15:25.317298 ignition[679]: failed to fetch config: resource requires networking
Aug 13 00:15:25.328578 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:15:25.317735 ignition[679]: Ignition finished successfully
Aug 13 00:15:25.375893 systemd-networkd[785]: lo: Link UP
Aug 13 00:15:25.375925 systemd-networkd[785]: lo: Gained carrier
Aug 13 00:15:25.380465 systemd-networkd[785]: Enumeration completed
Aug 13 00:15:25.381674 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:15:25.381682 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:15:25.383183 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:15:25.383191 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:15:25.384408 systemd-networkd[785]: eth0: Link UP
Aug 13 00:15:25.384416 systemd-networkd[785]: eth0: Gained carrier
Aug 13 00:15:25.384433 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:15:25.386561 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:15:25.388886 systemd[1]: Reached target network.target - Network.
Aug 13 00:15:25.395862 systemd-networkd[785]: eth1: Link UP Aug 13 00:15:25.395871 systemd-networkd[785]: eth1: Gained carrier Aug 13 00:15:25.395892 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:15:25.400689 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:15:25.435250 ignition[788]: Ignition 2.19.0 Aug 13 00:15:25.435273 ignition[788]: Stage: fetch Aug 13 00:15:25.435675 ignition[788]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:25.435699 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:25.435900 ignition[788]: parsed url from cmdline: "" Aug 13 00:15:25.435908 ignition[788]: no config URL provided Aug 13 00:15:25.435919 ignition[788]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:15:25.435936 ignition[788]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:15:25.435973 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Aug 13 00:15:25.436948 ignition[788]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:15:25.453329 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Aug 13 00:15:25.466393 systemd-networkd[785]: eth0: DHCPv4 address 138.201.175.117/32, gateway 172.31.1.1 acquired from 172.31.1.1 Aug 13 00:15:25.637227 ignition[788]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Aug 13 00:15:25.647551 ignition[788]: GET result: OK Aug 13 00:15:25.647782 ignition[788]: parsing config with SHA512: 4ebe69c079fa337b4624b9e4f81ba46eb53047a78f3ca2acedf4a092d89fca8da3a1afecf86887110a3cd5e93ab914097d5c3602393ff6e7f6ef08ad6e6297af Aug 13 00:15:25.658610 unknown[788]: fetched base config from "system" Aug 13 00:15:25.658652 unknown[788]: fetched base config from "system" Aug 13 00:15:25.659830 ignition[788]: fetch: fetch complete Aug 13 00:15:25.658668 unknown[788]: fetched user config from "hetzner" Aug 13 00:15:25.659843 ignition[788]: fetch: fetch passed Aug 13 00:15:25.663467 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:15:25.659944 ignition[788]: Ignition finished successfully Aug 13 00:15:25.674736 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:15:25.707192 ignition[795]: Ignition 2.19.0 Aug 13 00:15:25.707248 ignition[795]: Stage: kargs Aug 13 00:15:25.707649 ignition[795]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:25.707673 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:25.710257 ignition[795]: kargs: kargs passed Aug 13 00:15:25.710412 ignition[795]: Ignition finished successfully Aug 13 00:15:25.717979 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:15:25.726609 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:15:25.763812 ignition[802]: Ignition 2.19.0 Aug 13 00:15:25.763838 ignition[802]: Stage: disks Aug 13 00:15:25.764322 ignition[802]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:25.764348 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:25.767057 ignition[802]: disks: disks passed Aug 13 00:15:25.773134 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Aug 13 00:15:25.767181 ignition[802]: Ignition finished successfully Aug 13 00:15:25.775558 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:15:25.778017 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:15:25.780840 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:15:25.783961 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:15:25.787275 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:15:25.795531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:15:25.841794 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 00:15:25.846755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:15:25.856493 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:15:25.987270 kernel: EXT4-fs (sda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 13 00:15:25.988878 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:15:25.991238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:15:26.005444 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:15:26.011446 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:15:26.017887 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:15:26.021462 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:15:26.021530 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:15:26.039261 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (819) Aug 13 00:15:26.044747 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:15:26.044829 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:15:26.049789 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:15:26.057172 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:15:26.064683 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:15:26.064736 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:15:26.075268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:15:26.088408 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:15:26.175240 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:15:26.179730 coreos-metadata[821]: Aug 13 00:15:26.179 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Aug 13 00:15:26.182940 coreos-metadata[821]: Aug 13 00:15:26.181 INFO Fetch successful Aug 13 00:15:26.184867 coreos-metadata[821]: Aug 13 00:15:26.183 INFO wrote hostname ci-4081-3-5-0-684996fd0b to /sysroot/etc/hostname Aug 13 00:15:26.188460 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Aug 13 00:15:26.192906 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:15:26.203650 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:15:26.212088 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:15:26.406673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:15:26.418434 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:15:26.423514 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:15:26.444320 kernel: BTRFS info (device sda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:15:26.444898 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:15:26.496347 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:15:26.502195 ignition[936]: INFO : Ignition 2.19.0 Aug 13 00:15:26.502195 ignition[936]: INFO : Stage: mount Aug 13 00:15:26.505283 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:26.505283 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:26.505283 ignition[936]: INFO : mount: mount passed Aug 13 00:15:26.511759 ignition[936]: INFO : Ignition finished successfully Aug 13 00:15:26.512389 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:15:26.524472 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:15:27.001559 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:15:27.019404 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (947) Aug 13 00:15:27.023425 kernel: BTRFS info (device sda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:15:27.023497 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:15:27.023528 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:15:27.031456 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:15:27.031570 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:15:27.037810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 00:15:27.073389 systemd-networkd[785]: eth1: Gained IPv6LL Aug 13 00:15:27.089650 ignition[964]: INFO : Ignition 2.19.0 Aug 13 00:15:27.089650 ignition[964]: INFO : Stage: files Aug 13 00:15:27.092884 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:27.092884 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:27.092884 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:15:27.100184 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:15:27.100184 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:15:27.105176 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:15:27.105176 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:15:27.110862 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:15:27.107682 unknown[964]: wrote ssh authorized keys file for user: core Aug 13 00:15:27.115934 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:15:27.115934 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:15:27.115934 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:15:27.115934 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:15:27.225036 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:15:27.329424 systemd-networkd[785]: eth0: Gained IPv6LL Aug 13 00:15:27.460469 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:15:27.464368 ignition[964]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:15:27.464368 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:15:27.621059 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:15:28.042370 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:15:28.042370 ignition[964]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:15:28.051380 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:15:28.095982 ignition[964]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Aug 13 00:15:28.095982 ignition[964]: INFO : files: files passed Aug 13 00:15:28.095982 ignition[964]: INFO : Ignition finished successfully Aug 13 00:15:28.065325 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:15:28.074725 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:15:28.097458 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:15:28.107701 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:15:28.107926 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:15:28.129730 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:28.129730 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:28.137388 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:15:28.141796 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:15:28.144368 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:15:28.153679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:15:28.225849 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:15:28.227528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:15:28.232195 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:15:28.235812 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:15:28.238974 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:15:28.246709 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:15:28.291852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:15:28.300631 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:15:28.341812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:15:28.345851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:15:28.348062 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:15:28.351138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:15:28.351452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:15:28.355577 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:15:28.357435 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:15:28.360615 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:15:28.363720 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:15:28.366850 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:15:28.370194 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:15:28.373448 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:15:28.378528 systemd[1]: Stopped target sysinit.target - System Initialization. 
Aug 13 00:15:28.381965 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:15:28.385022 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:15:28.387866 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:15:28.388182 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:15:28.392120 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:15:28.394294 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:15:28.397671 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:15:28.399061 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:15:28.401300 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:15:28.401572 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:15:28.406345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:15:28.406652 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:15:28.410718 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:15:28.410956 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:15:28.413745 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:15:28.414012 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:15:28.424685 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:15:28.443375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:15:28.445431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:15:28.445759 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:15:28.450763 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:15:28.451066 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:15:28.470484 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:15:28.470680 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:15:28.488942 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:15:28.495952 ignition[1016]: INFO : Ignition 2.19.0 Aug 13 00:15:28.495952 ignition[1016]: INFO : Stage: umount Aug 13 00:15:28.495952 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:15:28.495952 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Aug 13 00:15:28.503747 ignition[1016]: INFO : umount: umount passed Aug 13 00:15:28.503747 ignition[1016]: INFO : Ignition finished successfully Aug 13 00:15:28.503101 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:15:28.503356 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:15:28.505902 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:15:28.506135 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:15:28.508898 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:15:28.509112 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:15:28.511287 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:15:28.511404 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Aug 13 00:15:28.514051 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:15:28.514144 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:15:28.516710 systemd[1]: Stopped target network.target - Network. Aug 13 00:15:28.519101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:15:28.519236 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:15:28.522114 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:15:28.524658 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:15:28.531330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:15:28.534175 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:15:28.536703 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:15:28.539919 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:15:28.540039 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:15:28.542939 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:15:28.543053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:15:28.545719 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:15:28.545829 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:15:28.548691 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:15:28.548788 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:15:28.552361 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:15:28.552455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:15:28.555531 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:15:28.558071 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:15:28.563414 systemd-networkd[785]: eth0: DHCPv6 lease lost Aug 13 00:15:28.563926 systemd-networkd[785]: eth1: DHCPv6 lease lost Aug 13 00:15:28.569486 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:15:28.569794 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:15:28.572429 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:15:28.572504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:15:28.584518 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:15:28.590399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:15:28.590531 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:15:28.595888 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:15:28.606960 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:15:28.607495 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:15:28.626844 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:15:28.627359 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:15:28.643417 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:15:28.643563 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:15:28.646040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 13 00:15:28.646138 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:15:28.650191 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:15:28.650326 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:15:28.654977 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:15:28.655460 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:15:28.660193 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:15:28.660329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:15:28.672646 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:15:28.676418 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:15:28.676568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:15:28.680666 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:15:28.680781 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:15:28.685384 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:15:28.685493 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:15:28.688497 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:15:28.688599 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:15:28.691801 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:15:28.691899 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:15:28.695360 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:15:28.695455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:15:28.698944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:15:28.699061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:28.707700 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:15:28.707901 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:15:28.726867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:15:28.727128 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:15:28.731579 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:15:28.742562 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:15:28.763466 systemd[1]: Switching root. Aug 13 00:15:28.801340 systemd-journald[235]: Journal stopped Aug 13 00:15:30.780768 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:15:30.780922 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:15:30.780957 kernel: SELinux: policy capability open_perms=1 Aug 13 00:15:30.781005 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:15:30.781039 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:15:30.781068 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:15:30.781097 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:15:30.781125 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:15:30.781154 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:15:30.781190 kernel: audit: type=1403 audit(1755044129.147:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:15:30.781251 systemd[1]: Successfully loaded SELinux policy in 65.895ms. Aug 13 00:15:30.781313 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.830ms. Aug 13 00:15:30.781350 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:15:30.781392 systemd[1]: Detected virtualization kvm. Aug 13 00:15:30.781427 systemd[1]: Detected architecture arm64. Aug 13 00:15:30.781457 systemd[1]: Detected first boot. Aug 13 00:15:30.781489 systemd[1]: Hostname set to <ci-4081-3-5-0-684996fd0b>. Aug 13 00:15:30.781525 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:15:30.781556 zram_generator::config[1075]: No configuration found. Aug 13 00:15:30.781588 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:15:30.781619 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:15:30.781651 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:15:30.781684 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:15:30.781716 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:15:30.781748 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:15:30.781783 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:15:30.781816 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:15:30.781848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:15:30.781880 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:15:30.781912 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:15:30.781943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:15:30.782018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:15:30.782061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:15:30.782095 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:15:30.782142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:15:30.782175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:15:30.782253 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:15:30.782291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:15:30.782324 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:15:30.782355 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:15:30.782387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:15:30.782427 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:15:30.782463 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:15:30.782514 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:15:30.782562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:15:30.782599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:15:30.782633 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:15:30.782678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:15:30.782725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:15:30.782760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:15:30.782799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:15:30.782834 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:15:30.782869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:15:30.782907 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:15:30.782939 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:15:30.782971 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:15:30.783024 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:15:30.783057 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:15:30.783090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:30.783130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:15:30.783167 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:15:30.783222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:15:30.783261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:15:30.783293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:15:30.783331 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:15:30.783364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:15:30.783397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:15:30.783430 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:15:30.783462 kernel: fuse: init (API version 7.39) Aug 13 00:15:30.783499 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Aug 13 00:15:30.783531 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:15:30.783564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:15:30.783599 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:15:30.783632 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:15:30.783663 kernel: loop: module loaded Aug 13 00:15:30.783694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:15:30.783726 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:15:30.783759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:15:30.783792 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:15:30.783824 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:15:30.783908 systemd-journald[1165]: Collecting audit messages is disabled. Aug 13 00:15:30.784022 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:15:30.784075 systemd-journald[1165]: Journal started Aug 13 00:15:30.784139 systemd-journald[1165]: Runtime Journal (/run/log/journal/460d07a45d4d4dccac82779cf4478266) is 8.0M, max 76.6M, 68.6M free. Aug 13 00:15:30.794241 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:15:30.796233 kernel: ACPI: bus type drm_connector registered Aug 13 00:15:30.798308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:15:30.800925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:15:30.807541 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:15:30.808668 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:15:30.815664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:15:30.816084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:15:30.822662 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:15:30.825860 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:15:30.826431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:15:30.829229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:15:30.829602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:15:30.832412 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:15:30.832765 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:15:30.835714 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:15:30.836157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:15:30.842420 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:15:30.847368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:15:30.853604 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:15:30.883484 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:15:30.892434 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:15:30.905422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Aug 13 00:15:30.908524 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:15:30.929642 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:15:30.942089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:15:30.951400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:15:30.960553 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:15:30.965613 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:15:30.975679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:15:30.992674 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:15:31.006704 systemd-journald[1165]: Time spent on flushing to /var/log/journal/460d07a45d4d4dccac82779cf4478266 is 114.332ms for 1110 entries. Aug 13 00:15:31.006704 systemd-journald[1165]: System Journal (/var/log/journal/460d07a45d4d4dccac82779cf4478266) is 8.0M, max 584.8M, 576.8M free. Aug 13 00:15:31.143381 systemd-journald[1165]: Received client request to flush runtime journal. Aug 13 00:15:31.021149 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:15:31.025750 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:15:31.065544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:15:31.073864 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:15:31.155547 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:15:31.161623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:15:31.170396 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Aug 13 00:15:31.170442 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Aug 13 00:15:31.171761 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:15:31.196710 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:15:31.202375 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:15:31.214926 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:15:31.236257 udevadm[1228]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:15:31.318087 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:15:31.329675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:15:31.375167 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Aug 13 00:15:31.375857 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Aug 13 00:15:31.391142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:15:32.312079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:15:32.323551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 13 00:15:32.390239 systemd-udevd[1240]: Using default interface naming scheme 'v255'. Aug 13 00:15:32.444040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:15:32.464536 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:15:32.512829 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:15:32.629778 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Aug 13 00:15:32.689922 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:15:32.927289 systemd-networkd[1246]: lo: Link UP Aug 13 00:15:32.927309 systemd-networkd[1246]: lo: Gained carrier Aug 13 00:15:32.932813 systemd-networkd[1246]: Enumeration completed Aug 13 00:15:32.933519 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:15:32.938140 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:15:32.938154 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:15:32.942395 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:15:32.942410 systemd-networkd[1246]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:15:32.944459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:15:32.947816 systemd-networkd[1246]: eth0: Link UP Aug 13 00:15:32.947850 systemd-networkd[1246]: eth0: Gained carrier Aug 13 00:15:32.947885 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:15:32.950956 systemd-networkd[1246]: eth1: Link UP Aug 13 00:15:32.951148 systemd-networkd[1246]: eth1: Gained carrier Aug 13 00:15:32.951463 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:15:33.022314 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1260) Aug 13 00:15:33.019377 systemd-networkd[1246]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Aug 13 00:15:33.023382 systemd-networkd[1246]: eth0: DHCPv4 address 138.201.175.117/32, gateway 172.31.1.1 acquired from 172.31.1.1 Aug 13 00:15:33.054245 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:15:33.109591 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Aug 13 00:15:33.125468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:33.132659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:15:33.140769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:15:33.161229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:15:33.168366 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Aug 13 00:15:33.168507 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:15:33.208928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:15:33.209527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:15:33.225520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:15:33.225915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:15:33.235464 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:15:33.236030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:15:33.239744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:15:33.239887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:15:33.243353 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Aug 13 00:15:33.243455 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:15:33.243494 kernel: [drm] features: -context_init Aug 13 00:15:33.252777 kernel: [drm] number of scanouts: 1 Aug 13 00:15:33.252894 kernel: [drm] number of cap sets: 0 Aug 13 00:15:33.261230 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Aug 13 00:15:33.265651 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:15:33.293170 kernel: Console: switching to colour frame buffer device 160x50 Aug 13 00:15:33.316301 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Aug 13 00:15:33.330790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:33.345801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:15:33.346552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:33.360501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:15:33.465169 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:15:33.479724 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:15:33.483116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:15:33.520668 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:15:33.568356 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:15:33.571323 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:15:33.581553 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:15:33.602596 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:15:33.650604 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:15:33.653651 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:15:33.655977 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Aug 13 00:15:33.656356 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:15:33.658220 systemd[1]: Reached target machines.target - Containers. Aug 13 00:15:33.662564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:15:33.672563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:15:33.683772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:15:33.686138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:15:33.690526 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:15:33.708554 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:15:33.725575 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:15:33.738619 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:15:33.777835 kernel: loop0: detected capacity change from 0 to 114432 Aug 13 00:15:33.791639 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:15:33.798326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:15:33.804545 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:15:33.831815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:15:33.855283 kernel: loop1: detected capacity change from 0 to 114328 Aug 13 00:15:33.907367 kernel: loop2: detected capacity change from 0 to 203944 Aug 13 00:15:33.960388 kernel: loop3: detected capacity change from 0 to 8 Aug 13 00:15:33.986501 kernel: loop4: detected capacity change from 0 to 114432 Aug 13 00:15:34.012427 kernel: loop5: detected capacity change from 0 to 114328 Aug 13 00:15:34.049248 kernel: loop6: detected capacity change from 0 to 203944 Aug 13 00:15:34.083405 kernel: loop7: detected capacity change from 0 to 8 Aug 13 00:15:34.084408 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Aug 13 00:15:34.086461 (sd-merge)[1335]: Merged extensions into '/usr'. Aug 13 00:15:34.126107 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:15:34.126142 systemd[1]: Reloading... Aug 13 00:15:34.308985 systemd-networkd[1246]: eth1: Gained IPv6LL Aug 13 00:15:34.323449 zram_generator::config[1363]: No configuration found. Aug 13 00:15:34.606307 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:15:34.666293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:34.689447 systemd-networkd[1246]: eth0: Gained IPv6LL Aug 13 00:15:34.850119 systemd[1]: Reloading finished in 723 ms. Aug 13 00:15:34.877296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:15:34.880479 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:15:34.883424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
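The (sd-merge) lines above show systemd-sysext overlaying four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') onto /usr. A rough sketch of how the raw images could be enumerated; the directory list follows the usual systemd-sysext search path, which is an assumption to verify on your image:

from pathlib import Path

# Assumed systemd-sysext search directories; adjust if the distribution
# stores extension images elsewhere.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extensions():
    found = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            # Extensions are raw/squashfs images or plain directory trees.
            found.extend(child.name for child in sorted(p.iterdir()))
    return found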
Aug 13 00:15:34.902505 systemd[1]: Starting ensure-sysext.service... Aug 13 00:15:34.912602 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:15:34.930120 systemd[1]: Reloading requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:15:34.930167 systemd[1]: Reloading... Aug 13 00:15:34.964633 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:15:34.967669 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:15:34.970758 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:15:34.972255 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Aug 13 00:15:34.972917 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Aug 13 00:15:34.984601 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:15:34.984870 systemd-tmpfiles[1410]: Skipping /boot Aug 13 00:15:35.008941 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:15:35.009293 systemd-tmpfiles[1410]: Skipping /boot Aug 13 00:15:35.113311 zram_generator::config[1439]: No configuration found. Aug 13 00:15:35.375712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:15:35.559363 systemd[1]: Reloading finished in 628 ms. Aug 13 00:15:35.584044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:15:35.611570 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:15:35.626580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:15:35.635675 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:15:35.654584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:15:35.665755 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:15:35.708040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:35.713812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:15:35.722393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:15:35.731419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:15:35.737707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:15:35.767648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:35.768346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:15:35.780529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:15:35.794486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
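The systemd-tmpfiles warnings above flag paths configured more than once across tmpfiles.d fragments (e.g. /root, /var/log/journal). A simplified duplicate scan along the same lines (illustrative only; it ignores line types and cross-directory overrides):

from collections import defaultdict
from pathlib import Path

# Rough scan: report any path that appears in more than one tmpfiles.d
# fragment, mirroring the "Duplicate line for path ..." warnings above.
def duplicate_paths(root="/usr/lib/tmpfiles.d"):
    seen = defaultdict(list)
    for frag in sorted(Path(root).glob("*.conf")):
        for line in frag.read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and not line.lstrip().startswith("#"):
                seen[fields[1]].append(frag.name)  # fields[1] is the path
    return {path: srcs for path, srcs in seen.items() if len(srcs) > 1}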
Aug 13 00:15:35.813487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:15:35.818525 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:15:35.845526 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:15:35.854814 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:15:35.855465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:15:35.863332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:15:35.863785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:15:35.866943 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:15:35.868416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:15:35.875026 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:15:35.879056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:15:35.901252 systemd[1]: Finished ensure-sysext.service. Aug 13 00:15:35.921543 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:15:35.924751 augenrules[1524]: No rules Aug 13 00:15:35.934309 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:15:35.940027 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:15:35.943405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:15:35.950705 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:15:35.972585 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:15:35.977848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:15:36.017295 systemd-resolved[1487]: Positive Trust Anchors: Aug 13 00:15:36.017328 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:15:36.017451 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:15:36.029130 systemd-resolved[1487]: Using system hostname 'ci-4081-3-5-0-684996fd0b'. Aug 13 00:15:36.034463 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:15:36.037651 systemd[1]: Reached target network.target - Network. Aug 13 00:15:36.041385 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:15:36.043965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:15:36.046921 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Aug 13 00:15:36.144901 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:15:36.148144 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:15:36.150432 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:15:36.152517 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:15:36.154613 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:15:36.156642 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:15:36.156724 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:15:36.158260 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:15:36.160420 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:15:36.162467 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:15:36.165771 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:15:36.169807 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:15:36.174864 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:15:36.179414 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:15:36.182732 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:15:36.184694 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:15:36.186326 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:15:36.188373 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:15:36.188475 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:15:36.188530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:15:36.199415 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:15:36.207596 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:15:36.219681 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:15:36.229049 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:15:36.257850 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:15:36.262354 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:15:36.275239 jq[1545]: false Aug 13 00:15:36.279560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:36.300871 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:15:36.324059 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
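"System is tainted: cgroupsv1" above means the host still runs the legacy cgroup hierarchy rather than the unified one. A common runtime check for the cgroup version, assuming the standard sysfs layout:

from pathlib import Path

# On a unified (v2) hierarchy, /sys/fs/cgroup is a cgroup2 mount and
# exposes cgroup.controllers at its root; legacy v1 hosts do not.
def cgroup_version() -> int:
    return 2 if Path("/sys/fs/cgroup/cgroup.controllers").exists() else 1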
Aug 13 00:15:36.325717 coreos-metadata[1541]: Aug 13 00:15:36.325 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Aug 13 00:15:36.330907 coreos-metadata[1541]: Aug 13 00:15:36.329 INFO Fetch successful Aug 13 00:15:36.337718 coreos-metadata[1541]: Aug 13 00:15:36.333 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Aug 13 00:15:36.337718 coreos-metadata[1541]: Aug 13 00:15:36.333 INFO Fetch successful Aug 13 00:15:36.347639 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:15:36.354509 systemd-timesyncd[1532]: Contacted time server 85.220.190.246:123 (0.flatcar.pool.ntp.org). Aug 13 00:15:36.352043 dbus-daemon[1542]: [system] SELinux support is enabled Aug 13 00:15:36.354643 systemd-timesyncd[1532]: Initial clock synchronization to Wed 2025-08-13 00:15:36.574062 UTC. Aug 13 00:15:36.373113 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Aug 13 00:15:36.386040 extend-filesystems[1546]: Found loop4 Aug 13 00:15:36.386040 extend-filesystems[1546]: Found loop5 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found loop6 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found loop7 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda1 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda2 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda3 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found usr Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda4 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda6 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda7 Aug 13 00:15:36.397454 extend-filesystems[1546]: Found sda9 Aug 13 00:15:36.397454 extend-filesystems[1546]: Checking size of /dev/sda9 Aug 13 00:15:36.390565 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:15:36.426679 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:15:36.455513 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:15:36.461920 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:15:36.482011 extend-filesystems[1546]: Resized partition /dev/sda9 Aug 13 00:15:36.492579 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:15:36.501414 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:15:36.509236 extend-filesystems[1575]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:15:36.510469 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:15:36.545117 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:15:36.545706 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:15:36.561168 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:15:36.561769 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:15:36.576232 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Aug 13 00:15:36.580715 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:15:36.592636 jq[1580]: true Aug 13 00:15:36.640675 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:15:36.642383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
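The coreos-metadata agent above fetches instance data from Hetzner's link-local metadata service. A hedged sketch of the same two requests, using only the endpoint the agent logs:

import urllib.request

BASE = "http://169.254.169.254/hetzner/v1/metadata"  # endpoint from the log

def fetch(path: str = "") -> str:
    # Same requests the agent logs: "" for the metadata document,
    # "/private-networks" for the second fetch.
    with urllib.request.urlopen(BASE + path, timeout=5) as resp:
        return resp.read().decode()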
Aug 13 00:15:36.720261 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:15:36.729905 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:15:36.733877 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:15:36.733930 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:15:36.737569 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:15:36.737615 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:15:36.761391 tar[1590]: linux-arm64/helm Aug 13 00:15:36.762025 jq[1591]: true Aug 13 00:15:36.829450 update_engine[1578]: I20250813 00:15:36.815377 1578 main.cc:92] Flatcar Update Engine starting Aug 13 00:15:36.842824 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:15:36.851599 update_engine[1578]: I20250813 00:15:36.844852 1578 update_check_scheduler.cc:74] Next update check in 3m35s Aug 13 00:15:36.863935 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:15:36.871522 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:15:36.958341 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Aug 13 00:15:36.991235 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:15:37.023269 extend-filesystems[1575]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:15:37.023269 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 5 Aug 13 00:15:37.023269 extend-filesystems[1575]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Aug 13 00:15:37.004984 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:15:37.070745 extend-filesystems[1546]: Resized filesystem in /dev/sda9 Aug 13 00:15:37.070745 extend-filesystems[1546]: Found sr0 Aug 13 00:15:37.030497 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:15:37.031102 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:15:37.165255 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:15:37.170194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:15:37.193282 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1640) Aug 13 00:15:37.240479 systemd[1]: Starting sshkeys.service... Aug 13 00:15:37.253699 systemd-logind[1570]: New seat seat0. Aug 13 00:15:37.282609 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:15:37.282658 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Aug 13 00:15:37.283313 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:15:37.328435 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
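The resize2fs/EXT4 lines above grow the root filesystem from 1617920 to 9393147 blocks of 4 KiB. Converting the logged block counts to bytes makes the change concrete:

BLOCK = 4096                     # "(4k) blocks" per the resize output
old, new = 1617920, 9393147      # block counts from the log
print(f"before: {old * BLOCK / 2**30:.2f} GiB")  # ~6.17 GiB
print(f"after:  {new * BLOCK / 2**30:.2f} GiB")  # ~35.83 GiB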
Aug 13 00:15:37.451336 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:15:37.640915 locksmithd[1615]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:15:37.684274 containerd[1592]: time="2025-08-13T00:15:37.680924207Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:15:37.717253 coreos-metadata[1657]: Aug 13 00:15:37.715 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Aug 13 00:15:37.719537 coreos-metadata[1657]: Aug 13 00:15:37.719 INFO Fetch successful Aug 13 00:15:37.725690 unknown[1657]: wrote ssh authorized keys file for user: core Aug 13 00:15:37.802103 update-ssh-keys[1669]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:15:37.808137 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:15:37.831133 systemd[1]: Finished sshkeys.service. Aug 13 00:15:37.943336 containerd[1592]: time="2025-08-13T00:15:37.940003348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.955086 containerd[1592]: time="2025-08-13T00:15:37.954992326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:37.955086 containerd[1592]: time="2025-08-13T00:15:37.955079250Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:15:37.957321 containerd[1592]: time="2025-08-13T00:15:37.955121993Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:15:37.957321 containerd[1592]: time="2025-08-13T00:15:37.955610000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:15:37.957321 containerd[1592]: time="2025-08-13T00:15:37.955680114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.957321 containerd[1592]: time="2025-08-13T00:15:37.955855319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:37.957321 containerd[1592]: time="2025-08-13T00:15:37.955898801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.960770 containerd[1592]: time="2025-08-13T00:15:37.960668193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:37.960883 containerd[1592]: time="2025-08-13T00:15:37.960763378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.960883 containerd[1592]: time="2025-08-13T00:15:37.960824410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:37.961021 containerd[1592]: time="2025-08-13T00:15:37.960855604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.961278 containerd[1592]: time="2025-08-13T00:15:37.961197176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.966612 containerd[1592]: time="2025-08-13T00:15:37.966543801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:15:37.968255 containerd[1592]: time="2025-08-13T00:15:37.967167516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:15:37.968255 containerd[1592]: time="2025-08-13T00:15:37.967250125Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:15:37.968255 containerd[1592]: time="2025-08-13T00:15:37.967572627Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:15:37.969315 containerd[1592]: time="2025-08-13T00:15:37.969260637Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:15:37.980286 containerd[1592]: time="2025-08-13T00:15:37.980187246Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:15:37.980451 containerd[1592]: time="2025-08-13T00:15:37.980331585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:15:37.980451 containerd[1592]: time="2025-08-13T00:15:37.980373917Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:15:37.980451 containerd[1592]: time="2025-08-13T00:15:37.980411193Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:15:37.980658 containerd[1592]: time="2025-08-13T00:15:37.980467992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.980785069Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.981666104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.981938178Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.981982400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.982015279Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.982057858Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.982092052Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.982123205Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.982240 containerd[1592]: time="2025-08-13T00:15:37.982160440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.985943 containerd[1592]: time="2025-08-13T00:15:37.985869771Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.986055 containerd[1592]: time="2025-08-13T00:15:37.985956284Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.986055 containerd[1592]: time="2025-08-13T00:15:37.985999027Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.986055 containerd[1592]: time="2025-08-13T00:15:37.986034372Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:15:37.986265 containerd[1592]: time="2025-08-13T00:15:37.986085828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.986265 containerd[1592]: time="2025-08-13T00:15:37.986124584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.986265 containerd[1592]: time="2025-08-13T00:15:37.986159518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.986265 containerd[1592]: time="2025-08-13T00:15:37.986194534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.986265 containerd[1592]: time="2025-08-13T00:15:37.986251867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.986297117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988080722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988125232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988160783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988242117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988280997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988315890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988372483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988426692Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988512876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988550029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988579374Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988757619Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:15:37.989238 containerd[1592]: time="2025-08-13T00:15:37.988799252Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988830775Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988862093Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988889547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988943920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988969977Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:15:37.990199 containerd[1592]: time="2025-08-13T00:15:37.988998705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:15:37.995091 containerd[1592]: time="2025-08-13T00:15:37.993892257Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:15:37.995091 containerd[1592]: time="2025-08-13T00:15:37.994078516Z" level=info msg="Connect containerd service" Aug 13 00:15:37.995091 containerd[1592]: time="2025-08-13T00:15:37.994280928Z" level=info msg="using legacy CRI server" Aug 13 00:15:37.995091 containerd[1592]: time="2025-08-13T00:15:37.994302258Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:15:37.995091 containerd[1592]: time="2025-08-13T00:15:37.994505369Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:15:38.004290 containerd[1592]: time="2025-08-13T00:15:38.003617244Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 
00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.006758578Z" level=info msg="Start subscribing containerd event" Aug 13 00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.008125215Z" level=info msg="Start recovering state" Aug 13 00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.008378193Z" level=info msg="Start event monitor" Aug 13 00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.008417721Z" level=info msg="Start snapshots syncer" Aug 13 00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.008443814Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:15:38.009285 containerd[1592]: time="2025-08-13T00:15:38.008476255Z" level=info msg="Start streaming server" Aug 13 00:15:38.016600 containerd[1592]: time="2025-08-13T00:15:38.013452989Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:15:38.016600 containerd[1592]: time="2025-08-13T00:15:38.013621586Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:15:38.016600 containerd[1592]: time="2025-08-13T00:15:38.013760569Z" level=info msg="containerd successfully booted in 0.344615s" Aug 13 00:15:38.014017 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:15:39.457421 tar[1590]: linux-arm64/LICENSE Aug 13 00:15:39.462239 tar[1590]: linux-arm64/README.md Aug 13 00:15:39.513051 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:15:39.662635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:15:39.679173 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:40.566570 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:15:40.634393 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:15:40.648858 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:15:40.662793 systemd[1]: Started sshd@0-138.201.175.117:22-139.178.89.65:49480.service - OpenSSH per-connection server daemon (139.178.89.65:49480). Aug 13 00:15:40.703322 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:15:40.704373 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:15:40.718071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:15:40.761914 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:15:40.778765 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:15:40.793979 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:15:40.799835 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:15:40.802769 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:15:40.805368 systemd[1]: Startup finished in 10.104s (kernel) + 11.723s (userspace) = 21.828s. Aug 13 00:15:40.992991 kubelet[1691]: E0813 00:15:40.992917 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:40.999038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:41.001019 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
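The kubelet failure above is the stock behavior when no kubelet configuration exists yet: /var/lib/kubelet/config.yaml is normally written later (for example by kubeadm), so the unit exits with status 1 until then. A minimal sketch of the failing precondition:

import sys
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")

if not CONFIG.is_file():
    # systemd records status=1/FAILURE, matching the journal above.
    print(f"failed to load Kubelet config file {CONFIG}", file=sys.stderr)
    sys.exit(1)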
Aug 13 00:15:41.808648 sshd[1707]: Accepted publickey for core from 139.178.89.65 port 49480 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:41.813764 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:41.842332 systemd-logind[1570]: New session 1 of user core. Aug 13 00:15:41.845286 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:15:41.862657 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:15:41.892898 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:15:41.908808 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:15:41.936666 (systemd)[1730]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:15:42.220992 systemd[1730]: Queued start job for default target default.target. Aug 13 00:15:42.221936 systemd[1730]: Created slice app.slice - User Application Slice. Aug 13 00:15:42.221999 systemd[1730]: Reached target paths.target - Paths. Aug 13 00:15:42.222030 systemd[1730]: Reached target timers.target - Timers. Aug 13 00:15:42.233637 systemd[1730]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:15:42.250928 systemd[1730]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:15:42.251430 systemd[1730]: Reached target sockets.target - Sockets. Aug 13 00:15:42.251688 systemd[1730]: Reached target basic.target - Basic System. Aug 13 00:15:42.251927 systemd[1730]: Reached target default.target - Main User Target. Aug 13 00:15:42.251991 systemd[1730]: Startup finished in 300ms. Aug 13 00:15:42.252595 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:15:42.259544 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:15:42.995850 systemd[1]: Started sshd@1-138.201.175.117:22-139.178.89.65:40456.service - OpenSSH per-connection server daemon (139.178.89.65:40456). Aug 13 00:15:44.031588 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 40456 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:44.034810 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:44.049027 systemd-logind[1570]: New session 2 of user core. Aug 13 00:15:44.057635 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:15:44.735143 sshd[1742]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:44.744377 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:15:44.745938 systemd[1]: sshd@1-138.201.175.117:22-139.178.89.65:40456.service: Deactivated successfully. Aug 13 00:15:44.751140 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:15:44.754835 systemd-logind[1570]: Removed session 2. Aug 13 00:15:44.908898 systemd[1]: Started sshd@2-138.201.175.117:22-139.178.89.65:40462.service - OpenSSH per-connection server daemon (139.178.89.65:40462). Aug 13 00:15:45.942412 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 40462 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:45.945719 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:45.955832 systemd-logind[1570]: New session 3 of user core. Aug 13 00:15:45.967898 systemd[1]: Started session-3.scope - Session 3 of User core. 
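The "SHA256:Tbpw…" value in the sshd lines above is an OpenSSH public-key fingerprint: the SHA-256 digest of the decoded key blob, base64-encoded without padding. A sketch, assuming a standard authorized_keys-style line ("ssh-ed25519 AAAA… comment"):

import base64, hashlib

def openssh_fingerprint(pubkey_line: str) -> str:
    # Field 1 of an authorized_keys-style line is the base64 key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")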
Aug 13 00:15:46.636396 sshd[1750]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:46.644487 systemd[1]: sshd@2-138.201.175.117:22-139.178.89.65:40462.service: Deactivated successfully. Aug 13 00:15:46.650840 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:15:46.651284 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:15:46.654749 systemd-logind[1570]: Removed session 3. Aug 13 00:15:46.808859 systemd[1]: Started sshd@3-138.201.175.117:22-139.178.89.65:40464.service - OpenSSH per-connection server daemon (139.178.89.65:40464). Aug 13 00:15:47.843449 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 40464 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:47.846638 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:47.859909 systemd-logind[1570]: New session 4 of user core. Aug 13 00:15:47.865825 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:15:48.545799 sshd[1758]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:48.552813 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:15:48.555400 systemd[1]: sshd@3-138.201.175.117:22-139.178.89.65:40464.service: Deactivated successfully. Aug 13 00:15:48.563846 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:15:48.566091 systemd-logind[1570]: Removed session 4. Aug 13 00:15:48.715769 systemd[1]: Started sshd@4-138.201.175.117:22-139.178.89.65:40470.service - OpenSSH per-connection server daemon (139.178.89.65:40470). Aug 13 00:15:49.751046 sshd[1766]: Accepted publickey for core from 139.178.89.65 port 40470 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:49.754236 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:49.768375 systemd-logind[1570]: New session 5 of user core. Aug 13 00:15:49.775946 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:15:50.304190 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:15:50.304922 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:50.329023 sudo[1770]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:50.493397 sshd[1766]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:50.503552 systemd[1]: sshd@4-138.201.175.117:22-139.178.89.65:40470.service: Deactivated successfully. Aug 13 00:15:50.509681 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:15:50.511688 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:15:50.514228 systemd-logind[1570]: Removed session 5. Aug 13 00:15:50.663883 systemd[1]: Started sshd@5-138.201.175.117:22-139.178.89.65:42692.service - OpenSSH per-connection server daemon (139.178.89.65:42692). Aug 13 00:15:51.250122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:15:51.259652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:15:51.562660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:15:51.576065 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:15:51.679740 kubelet[1789]: E0813 00:15:51.679448 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:15:51.688662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:15:51.689309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:15:51.693621 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 42692 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:51.696941 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:51.707627 systemd-logind[1570]: New session 6 of user core. Aug 13 00:15:51.719857 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:15:52.230132 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:15:52.231696 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:52.239882 sudo[1800]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:52.252764 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:15:52.253605 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:52.280824 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:15:52.296156 auditctl[1803]: No rules Aug 13 00:15:52.297087 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:15:52.297660 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:15:52.310031 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:15:52.372131 augenrules[1822]: No rules Aug 13 00:15:52.376748 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:15:52.380935 sudo[1799]: pam_unix(sudo:session): session closed for user root Aug 13 00:15:52.545578 sshd[1775]: pam_unix(sshd:session): session closed for user core Aug 13 00:15:52.553412 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:15:52.554696 systemd[1]: sshd@5-138.201.175.117:22-139.178.89.65:42692.service: Deactivated successfully. Aug 13 00:15:52.561694 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:15:52.566543 systemd-logind[1570]: Removed session 6. Aug 13 00:15:52.727432 systemd[1]: Started sshd@6-138.201.175.117:22-139.178.89.65:42704.service - OpenSSH per-connection server daemon (139.178.89.65:42704). Aug 13 00:15:53.743253 sshd[1831]: Accepted publickey for core from 139.178.89.65 port 42704 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:15:53.746432 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:15:53.756710 systemd-logind[1570]: New session 7 of user core. Aug 13 00:15:53.767870 systemd[1]: Started session-7.scope - Session 7 of User core. 
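systemd keeps retrying the failed kubelet unit; diffing the journal timestamps above gives the cadence between the first failure and its scheduled restart, consistent with a restart delay on the order of 10 s:

from datetime import datetime

fmt = "%H:%M:%S.%f"
failed = datetime.strptime("00:15:41.001019", fmt)     # kubelet failure
restarted = datetime.strptime("00:15:51.250122", fmt)  # restart counter 1
print((restarted - failed).total_seconds())            # ~10.25 s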
Aug 13 00:15:54.278160 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:15:54.279071 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:15:54.943717 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:15:54.961236 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:15:55.517300 dockerd[1850]: time="2025-08-13T00:15:55.517150749Z" level=info msg="Starting up" Aug 13 00:15:55.745428 dockerd[1850]: time="2025-08-13T00:15:55.745310862Z" level=info msg="Loading containers: start." Aug 13 00:15:55.950309 kernel: Initializing XFRM netlink socket Aug 13 00:15:56.106551 systemd-networkd[1246]: docker0: Link UP Aug 13 00:15:56.135537 dockerd[1850]: time="2025-08-13T00:15:56.135141303Z" level=info msg="Loading containers: done." Aug 13 00:15:56.165540 dockerd[1850]: time="2025-08-13T00:15:56.164112113Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:15:56.165540 dockerd[1850]: time="2025-08-13T00:15:56.164366866Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:15:56.165540 dockerd[1850]: time="2025-08-13T00:15:56.164632001Z" level=info msg="Daemon has completed initialization" Aug 13 00:15:56.166285 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2484619636-merged.mount: Deactivated successfully. Aug 13 00:15:56.247239 dockerd[1850]: time="2025-08-13T00:15:56.246452952Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:15:56.249275 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:15:57.847587 containerd[1592]: time="2025-08-13T00:15:57.847490161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:15:58.637259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237544929.mount: Deactivated successfully. 
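The dockerd warning above notes the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. One way to confirm a kernel config option, assuming the build exposes /proc/config.gz (not every kernel does):

import gzip
from pathlib import Path

def kernel_config_has(option: str) -> bool:
    cfg = Path("/proc/config.gz")
    if not cfg.exists():
        return False  # fall back to /boot/config-$(uname -r) if available
    text = gzip.decompress(cfg.read_bytes()).decode()
    return f"{option}=y" in text

# kernel_config_has("CONFIG_OVERLAY_FS_REDIRECT_DIR")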
Aug 13 00:16:00.434255 containerd[1592]: time="2025-08-13T00:16:00.432789749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:00.436149 containerd[1592]: time="2025-08-13T00:16:00.436058748Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651905" Aug 13 00:16:00.437012 containerd[1592]: time="2025-08-13T00:16:00.436950923Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:00.447143 containerd[1592]: time="2025-08-13T00:16:00.447047205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:00.450874 containerd[1592]: time="2025-08-13T00:16:00.450786001Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 2.603189212s" Aug 13 00:16:00.451081 containerd[1592]: time="2025-08-13T00:16:00.450871470Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:16:00.454629 containerd[1592]: time="2025-08-13T00:16:00.454529963Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:16:01.890910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:16:01.902809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:02.235228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:02.263644 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:16:02.395913 kubelet[2059]: E0813 00:16:02.395797 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:16:02.401354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:16:02.401886 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
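The pull above reports both the image size and the wall time, so the effective download throughput follows directly from the logged numbers:

size_bytes = 25_648_613   # size "25648613" for kube-apiserver v1.31.11
elapsed_s = 2.603189212   # "in 2.603189212s"
print(f"{size_bytes / elapsed_s / 2**20:.1f} MiB/s")  # ~9.4 MiB/s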
Aug 13 00:16:02.550968 containerd[1592]: time="2025-08-13T00:16:02.548714335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:02.552621 containerd[1592]: time="2025-08-13T00:16:02.552563124Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460303" Aug 13 00:16:02.553786 containerd[1592]: time="2025-08-13T00:16:02.553696508Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:02.565076 containerd[1592]: time="2025-08-13T00:16:02.564998878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:02.568443 containerd[1592]: time="2025-08-13T00:16:02.568347741Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 2.113705682s" Aug 13 00:16:02.568443 containerd[1592]: time="2025-08-13T00:16:02.568433945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:16:02.570176 containerd[1592]: time="2025-08-13T00:16:02.569361649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:16:04.173301 containerd[1592]: time="2025-08-13T00:16:04.172887190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:04.175872 containerd[1592]: time="2025-08-13T00:16:04.175794439Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125109" Aug 13 00:16:04.177835 containerd[1592]: time="2025-08-13T00:16:04.177725239Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:04.185693 containerd[1592]: time="2025-08-13T00:16:04.185602916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:04.189265 containerd[1592]: time="2025-08-13T00:16:04.189011059Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.619574302s" Aug 13 00:16:04.189265 containerd[1592]: time="2025-08-13T00:16:04.189091358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:16:04.192249 
containerd[1592]: time="2025-08-13T00:16:04.192154804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:16:05.799082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842663568.mount: Deactivated successfully. Aug 13 00:16:06.599463 containerd[1592]: time="2025-08-13T00:16:06.599375279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:06.602673 containerd[1592]: time="2025-08-13T00:16:06.602594799Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26916019" Aug 13 00:16:06.603668 containerd[1592]: time="2025-08-13T00:16:06.603582523Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:06.610540 containerd[1592]: time="2025-08-13T00:16:06.610407222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:06.613279 containerd[1592]: time="2025-08-13T00:16:06.612142493Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 2.419885496s" Aug 13 00:16:06.613279 containerd[1592]: time="2025-08-13T00:16:06.612240069Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:16:06.613501 containerd[1592]: time="2025-08-13T00:16:06.613342739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:16:07.311713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2459206849.mount: Deactivated successfully. 
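Each of the pulls above goes through containerd's CRI image service, so they can be checked or reproduced by hand with crictl pointed at the same socket. A sketch, assuming crictl is installed and containerd is listening on its default CRI endpoint:

    # List the images the CRI runtime already knows about.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # Pull an image through the same path the kubelet uses.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-proxy:v1.31.11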
Aug 13 00:16:08.952010 containerd[1592]: time="2025-08-13T00:16:08.951881496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:08.954257 containerd[1592]: time="2025-08-13T00:16:08.954153888Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Aug 13 00:16:08.970241 containerd[1592]: time="2025-08-13T00:16:08.968064457Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:08.976830 containerd[1592]: time="2025-08-13T00:16:08.976755191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:08.980812 containerd[1592]: time="2025-08-13T00:16:08.980718731Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.367305794s" Aug 13 00:16:08.980812 containerd[1592]: time="2025-08-13T00:16:08.980806453Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:16:08.982007 containerd[1592]: time="2025-08-13T00:16:08.981933845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:16:09.570952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255828621.mount: Deactivated successfully. 
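The pause:3.10 pull kicked off above is not an ordinary workload image: it is the sandbox image that containerd's CRI plugin starts in every pod to hold the pod's namespaces. Which pause image is used is a containerd setting, so a quick check looks like this (the path below is containerd's default config location and may differ on other setups):

    # Show the sandbox ("pause") image configured for the CRI plugin.
    grep -n 'sandbox_image' /etc/containerd/config.toml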
Aug 13 00:16:09.580627 containerd[1592]: time="2025-08-13T00:16:09.580427572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:09.582658 containerd[1592]: time="2025-08-13T00:16:09.582574297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Aug 13 00:16:09.586243 containerd[1592]: time="2025-08-13T00:16:09.585053138Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:09.591688 containerd[1592]: time="2025-08-13T00:16:09.591596322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:09.594450 containerd[1592]: time="2025-08-13T00:16:09.594384107Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 612.369903ms" Aug 13 00:16:09.594848 containerd[1592]: time="2025-08-13T00:16:09.594753880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:16:09.597824 containerd[1592]: time="2025-08-13T00:16:09.597408924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:16:10.271483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145073340.mount: Deactivated successfully. Aug 13 00:16:12.640877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:16:12.651586 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:12.979579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:12.993947 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:16:13.099446 kubelet[2203]: E0813 00:16:13.099314 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:16:13.106788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:16:13.107674 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
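The restart above fails for the same reason as the earlier ones: the kubelet is still waiting for /var/lib/kubelet/config.yaml. For reference, a minimal sketch of what such a file contains; in a kubeadm-based setup it is generated rather than written by hand, and the two field values below are illustrative ones taken from later lines of this log, not a recommended configuration:

    # Hand-written stand-in for the missing kubelet config (illustrative only).
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                     # matches "CgroupDriver":"cgroupfs" logged later
    staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" logged later
    EOF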
Aug 13 00:16:13.297370 containerd[1592]: time="2025-08-13T00:16:13.294937629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:13.299273 containerd[1592]: time="2025-08-13T00:16:13.299110388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406533" Aug 13 00:16:13.299273 containerd[1592]: time="2025-08-13T00:16:13.299158046Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:13.309167 containerd[1592]: time="2025-08-13T00:16:13.309081434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:13.313410 containerd[1592]: time="2025-08-13T00:16:13.313338784Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.715856468s" Aug 13 00:16:13.313659 containerd[1592]: time="2025-08-13T00:16:13.313614687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:16:22.004409 update_engine[1578]: I20250813 00:16:22.004249 1578 update_attempter.cc:509] Updating boot flags... Aug 13 00:16:22.115664 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2238) Aug 13 00:16:22.297232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2240) Aug 13 00:16:23.142482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:16:23.153789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:23.451609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:23.467004 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:16:23.567708 kubelet[2259]: E0813 00:16:23.567157 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:16:23.575694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:16:23.576074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:16:29.077719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:29.089777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:29.162506 systemd[1]: Reloading requested from client PID 2275 ('systemctl') (unit session-7.scope)... Aug 13 00:16:29.162544 systemd[1]: Reloading... Aug 13 00:16:29.409514 zram_generator::config[2318]: No configuration found. 
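The "Reloading requested from client PID 2275 ('systemctl')" entry above marks a full manager reload, during which generators such as zram_generator run again. The usual trigger is simply:

    # Re-read all unit files; systemd records the request with the caller's
    # PID, as seen in the log above.
    systemctl daemon-reload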
Aug 13 00:16:29.686675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:16:29.891677 systemd[1]: Reloading finished in 728 ms. Aug 13 00:16:29.982832 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:16:29.983118 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:16:29.984029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:29.995826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:30.261834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:30.283088 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:16:30.383385 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:16:30.383385 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:16:30.383385 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:16:30.384165 kubelet[2373]: I0813 00:16:30.383469 2373 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:16:35.794150 kubelet[2373]: I0813 00:16:35.794074 2373 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:16:35.797234 kubelet[2373]: I0813 00:16:35.795066 2373 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:16:35.797234 kubelet[2373]: I0813 00:16:35.795725 2373 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:16:35.858870 kubelet[2373]: E0813 00:16:35.858782 2373 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.201.175.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:35.862241 kubelet[2373]: I0813 00:16:35.862154 2373 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:16:35.879981 kubelet[2373]: E0813 00:16:35.879919 2373 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:16:35.880448 kubelet[2373]: I0813 00:16:35.880388 2373 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:16:35.888744 kubelet[2373]: I0813 00:16:35.888695 2373 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:16:35.890013 kubelet[2373]: I0813 00:16:35.889975 2373 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:16:35.890637 kubelet[2373]: I0813 00:16:35.890577 2373 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:16:35.891236 kubelet[2373]: I0813 00:16:35.890796 2373 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-0-684996fd0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:16:35.891655 kubelet[2373]: I0813 00:16:35.891624 2373 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:16:35.891796 kubelet[2373]: I0813 00:16:35.891774 2373 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:16:35.892334 kubelet[2373]: I0813 00:16:35.892302 2373 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:35.899457 kubelet[2373]: I0813 00:16:35.899404 2373 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:16:35.900138 kubelet[2373]: I0813 00:16:35.899692 2373 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:16:35.900138 kubelet[2373]: I0813 00:16:35.899747 2373 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:16:35.900138 kubelet[2373]: I0813 00:16:35.899781 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:16:35.901486 kubelet[2373]: W0813 00:16:35.901398 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.201.175.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-0-684996fd0b&limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:35.901604 kubelet[2373]: E0813 00:16:35.901519 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.201.175.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-0-684996fd0b&limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:35.910307 kubelet[2373]: W0813 00:16:35.908980 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.175.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:35.910307 kubelet[2373]: E0813 00:16:35.909334 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.201.175.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:35.910307 kubelet[2373]: I0813 00:16:35.909693 2373 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:16:35.911727 kubelet[2373]: I0813 00:16:35.911686 2373 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:16:35.912163 kubelet[2373]: W0813 00:16:35.912139 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:16:35.915929 kubelet[2373]: I0813 00:16:35.915886 2373 server.go:1274] "Started kubelet" Aug 13 00:16:35.922142 kubelet[2373]: I0813 00:16:35.922058 2373 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:16:35.923664 kubelet[2373]: I0813 00:16:35.923573 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:16:35.924432 kubelet[2373]: I0813 00:16:35.924397 2373 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:16:35.924664 kubelet[2373]: I0813 00:16:35.924487 2373 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:16:35.929632 kubelet[2373]: I0813 00:16:35.929558 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:16:35.932300 kubelet[2373]: E0813 00:16:35.928706 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.201.175.117:6443/api/v1/namespaces/default/events\": dial tcp 138.201.175.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-0-684996fd0b.185b2b65c6aca963 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-0-684996fd0b,UID:ci-4081-3-5-0-684996fd0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-0-684996fd0b,},FirstTimestamp:2025-08-13 00:16:35.915835747 +0000 UTC m=+5.625820133,LastTimestamp:2025-08-13 00:16:35.915835747 +0000 UTC m=+5.625820133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-0-684996fd0b,}" Aug 13 00:16:35.936983 kubelet[2373]: I0813 00:16:35.936936 2373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:16:35.941071 kubelet[2373]: I0813 
00:16:35.941015 2373 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:16:35.951248 kubelet[2373]: E0813 00:16:35.951147 2373 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:16:35.955894 kubelet[2373]: E0813 00:16:35.947039 2373 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-0-684996fd0b\" not found" Aug 13 00:16:35.955894 kubelet[2373]: I0813 00:16:35.941457 2373 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:16:35.955894 kubelet[2373]: I0813 00:16:35.955165 2373 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:16:35.955894 kubelet[2373]: W0813 00:16:35.955610 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.175.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:35.957116 kubelet[2373]: E0813 00:16:35.956368 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.201.175.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:35.957316 kubelet[2373]: E0813 00:16:35.957099 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.175.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-0-684996fd0b?timeout=10s\": dial tcp 138.201.175.117:6443: connect: connection refused" interval="200ms" Aug 13 00:16:35.960789 kubelet[2373]: I0813 00:16:35.960407 2373 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:16:35.960789 kubelet[2373]: I0813 00:16:35.960455 2373 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:16:35.960789 kubelet[2373]: I0813 00:16:35.960713 2373 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:16:35.992861 kubelet[2373]: I0813 00:16:35.992608 2373 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:16:35.997896 kubelet[2373]: I0813 00:16:35.997281 2373 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:16:35.997896 kubelet[2373]: I0813 00:16:35.997334 2373 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:16:35.997896 kubelet[2373]: I0813 00:16:35.997369 2373 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:16:35.997896 kubelet[2373]: E0813 00:16:35.997450 2373 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:16:36.007620 kubelet[2373]: W0813 00:16:36.007519 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.175.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:36.009046 kubelet[2373]: E0813 00:16:36.008990 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.201.175.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:36.038154 kubelet[2373]: I0813 00:16:36.038052 2373 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:16:36.038154 kubelet[2373]: I0813 00:16:36.038100 2373 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:16:36.038154 kubelet[2373]: I0813 00:16:36.038139 2373 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:36.040884 kubelet[2373]: I0813 00:16:36.040817 2373 policy_none.go:49] "None policy: Start" Aug 13 00:16:36.042598 kubelet[2373]: I0813 00:16:36.042438 2373 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:16:36.043299 kubelet[2373]: I0813 00:16:36.042765 2373 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:16:36.052310 kubelet[2373]: I0813 00:16:36.052078 2373 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:16:36.054250 kubelet[2373]: I0813 00:16:36.052519 2373 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:16:36.054250 kubelet[2373]: I0813 00:16:36.052560 2373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:16:36.056174 kubelet[2373]: I0813 00:16:36.056117 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:16:36.061249 kubelet[2373]: E0813 00:16:36.061121 2373 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-0-684996fd0b\" not found" Aug 13 00:16:36.156269 kubelet[2373]: I0813 00:16:36.156134 2373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.157173 kubelet[2373]: E0813 00:16:36.157071 2373 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.175.117:6443/api/v1/nodes\": dial tcp 138.201.175.117:6443: connect: connection refused" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.158662 kubelet[2373]: E0813 00:16:36.158548 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.175.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-0-684996fd0b?timeout=10s\": dial tcp 138.201.175.117:6443: connect: connection refused" interval="400ms" Aug 13 
00:16:36.256954 kubelet[2373]: I0813 00:16:36.256369 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.256954 kubelet[2373]: I0813 00:16:36.256489 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.256954 kubelet[2373]: I0813 00:16:36.256539 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.256954 kubelet[2373]: I0813 00:16:36.256586 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.256954 kubelet[2373]: I0813 00:16:36.256634 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.257562 kubelet[2373]: I0813 00:16:36.256674 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.257562 kubelet[2373]: I0813 00:16:36.256711 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.257562 kubelet[2373]: I0813 00:16:36.256759 2373 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dce4ca6ee05e13835ac167a14c772c7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-0-684996fd0b\" (UID: \"0dce4ca6ee05e13835ac167a14c772c7\") " pod="kube-system/kube-scheduler-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.257562 kubelet[2373]: I0813 00:16:36.256798 2373 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.360536 kubelet[2373]: I0813 00:16:36.360338 2373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.361468 kubelet[2373]: E0813 00:16:36.361378 2373 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.175.117:6443/api/v1/nodes\": dial tcp 138.201.175.117:6443: connect: connection refused" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.421826 containerd[1592]: time="2025-08-13T00:16:36.421567312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-0-684996fd0b,Uid:54fc0ea4b93ab5b1231ab715c0904ff2,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:36.421826 containerd[1592]: time="2025-08-13T00:16:36.421608037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-0-684996fd0b,Uid:0dce4ca6ee05e13835ac167a14c772c7,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:36.430326 containerd[1592]: time="2025-08-13T00:16:36.430120941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-0-684996fd0b,Uid:5318c4ed7dd9b92dfe3b9bc42da36778,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:36.559878 kubelet[2373]: E0813 00:16:36.559791 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.175.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-0-684996fd0b?timeout=10s\": dial tcp 138.201.175.117:6443: connect: connection refused" interval="800ms" Aug 13 00:16:36.765790 kubelet[2373]: I0813 00:16:36.765642 2373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.766877 kubelet[2373]: E0813 00:16:36.766817 2373 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.175.117:6443/api/v1/nodes\": dial tcp 138.201.175.117:6443: connect: connection refused" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:36.769800 kubelet[2373]: W0813 00:16:36.769664 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.201.175.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:36.769800 kubelet[2373]: E0813 00:16:36.769739 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.201.175.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:36.918845 kubelet[2373]: W0813 00:16:36.918706 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.201.175.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-0-684996fd0b&limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:36.918845 kubelet[2373]: E0813 00:16:36.918846 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://138.201.175.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-0-684996fd0b&limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:36.927827 kubelet[2373]: W0813 00:16:36.927739 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.201.175.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:36.927827 kubelet[2373]: E0813 00:16:36.927832 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.201.175.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:37.042356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878048503.mount: Deactivated successfully. Aug 13 00:16:37.055120 containerd[1592]: time="2025-08-13T00:16:37.054454470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:16:37.061131 containerd[1592]: time="2025-08-13T00:16:37.061013460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Aug 13 00:16:37.062784 containerd[1592]: time="2025-08-13T00:16:37.062630094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:16:37.065249 containerd[1592]: time="2025-08-13T00:16:37.065112393Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:16:37.069232 containerd[1592]: time="2025-08-13T00:16:37.068978779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:16:37.072243 containerd[1592]: time="2025-08-13T00:16:37.071428754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:16:37.072243 containerd[1592]: time="2025-08-13T00:16:37.071602495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:16:37.078730 containerd[1592]: time="2025-08-13T00:16:37.078648423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:16:37.085525 containerd[1592]: time="2025-08-13T00:16:37.085430279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 662.612612ms" Aug 13 00:16:37.092085 containerd[1592]: time="2025-08-13T00:16:37.091866094Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 661.47732ms" Aug 13 00:16:37.098562 containerd[1592]: time="2025-08-13T00:16:37.098480210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 676.742438ms" Aug 13 00:16:37.362735 kubelet[2373]: E0813 00:16:37.362495 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.201.175.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-0-684996fd0b?timeout=10s\": dial tcp 138.201.175.117:6443: connect: connection refused" interval="1.6s" Aug 13 00:16:37.378249 containerd[1592]: time="2025-08-13T00:16:37.377815720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:37.378249 containerd[1592]: time="2025-08-13T00:16:37.377929974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:37.378249 containerd[1592]: time="2025-08-13T00:16:37.377972619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.380389 containerd[1592]: time="2025-08-13T00:16:37.378685585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.381800 containerd[1592]: time="2025-08-13T00:16:37.379858726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:37.383701 containerd[1592]: time="2025-08-13T00:16:37.383566413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:37.383701 containerd[1592]: time="2025-08-13T00:16:37.383630780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.384634 containerd[1592]: time="2025-08-13T00:16:37.384451799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.387630 containerd[1592]: time="2025-08-13T00:16:37.387101398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:37.387630 containerd[1592]: time="2025-08-13T00:16:37.387245736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:37.387630 containerd[1592]: time="2025-08-13T00:16:37.387340507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.388860 containerd[1592]: time="2025-08-13T00:16:37.388394314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:37.401714 kubelet[2373]: W0813 00:16:37.401508 2373 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.201.175.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.201.175.117:6443: connect: connection refused Aug 13 00:16:37.401714 kubelet[2373]: E0813 00:16:37.401656 2373 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.201.175.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:37.536077 containerd[1592]: time="2025-08-13T00:16:37.534815942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-0-684996fd0b,Uid:5318c4ed7dd9b92dfe3b9bc42da36778,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5406bf23184dc632aa1d3b9e13ce2a453547922315da953a653331b161c0b69\"" Aug 13 00:16:37.550440 containerd[1592]: time="2025-08-13T00:16:37.549143587Z" level=info msg="CreateContainer within sandbox \"c5406bf23184dc632aa1d3b9e13ce2a453547922315da953a653331b161c0b69\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:16:37.572531 kubelet[2373]: I0813 00:16:37.572482 2373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:37.573859 kubelet[2373]: E0813 00:16:37.573556 2373 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.201.175.117:6443/api/v1/nodes\": dial tcp 138.201.175.117:6443: connect: connection refused" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:37.585613 containerd[1592]: time="2025-08-13T00:16:37.585538169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-0-684996fd0b,Uid:54fc0ea4b93ab5b1231ab715c0904ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"30dfe353e145595fab1a9ae4c0544a7eccc194aa11c381af78399e632a54035e\"" Aug 13 00:16:37.597291 containerd[1592]: time="2025-08-13T00:16:37.596496808Z" level=info msg="CreateContainer within sandbox \"c5406bf23184dc632aa1d3b9e13ce2a453547922315da953a653331b161c0b69\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4\"" Aug 13 00:16:37.598025 containerd[1592]: time="2025-08-13T00:16:37.597237257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-0-684996fd0b,Uid:0dce4ca6ee05e13835ac167a14c772c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2661af3b8d712686f3fe9378aa8b8f69a0fd4e1793541f5bead6a4ade999d7ad\"" Aug 13 00:16:37.602077 containerd[1592]: time="2025-08-13T00:16:37.602018193Z" level=info msg="CreateContainer within sandbox \"30dfe353e145595fab1a9ae4c0544a7eccc194aa11c381af78399e632a54035e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:16:37.603872 containerd[1592]: time="2025-08-13T00:16:37.602240900Z" level=info msg="StartContainer for \"d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4\"" Aug 13 00:16:37.623892 containerd[1592]: time="2025-08-13T00:16:37.623393206Z" level=info msg="CreateContainer within sandbox \"2661af3b8d712686f3fe9378aa8b8f69a0fd4e1793541f5bead6a4ade999d7ad\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:16:37.657122 containerd[1592]: time="2025-08-13T00:16:37.656782506Z" level=info msg="CreateContainer within sandbox \"30dfe353e145595fab1a9ae4c0544a7eccc194aa11c381af78399e632a54035e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0546755470fb386aeb34bca8a885bdab08b254e4bf686ba139206c61071de162\"" Aug 13 00:16:37.658597 containerd[1592]: time="2025-08-13T00:16:37.658510634Z" level=info msg="StartContainer for \"0546755470fb386aeb34bca8a885bdab08b254e4bf686ba139206c61071de162\"" Aug 13 00:16:37.663618 containerd[1592]: time="2025-08-13T00:16:37.663542000Z" level=info msg="CreateContainer within sandbox \"2661af3b8d712686f3fe9378aa8b8f69a0fd4e1793541f5bead6a4ade999d7ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703\"" Aug 13 00:16:37.667256 containerd[1592]: time="2025-08-13T00:16:37.665648693Z" level=info msg="StartContainer for \"623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703\"" Aug 13 00:16:37.859305 containerd[1592]: time="2025-08-13T00:16:37.857918121Z" level=info msg="StartContainer for \"d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4\" returns successfully" Aug 13 00:16:37.885673 containerd[1592]: time="2025-08-13T00:16:37.885458077Z" level=info msg="StartContainer for \"623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703\" returns successfully" Aug 13 00:16:37.923319 kubelet[2373]: E0813 00:16:37.923260 2373 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.201.175.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.201.175.117:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:16:37.963047 containerd[1592]: time="2025-08-13T00:16:37.962269644Z" level=info msg="StartContainer for \"0546755470fb386aeb34bca8a885bdab08b254e4bf686ba139206c61071de162\" returns successfully" Aug 13 00:16:39.180284 kubelet[2373]: I0813 00:16:39.180161 2373 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:43.907809 kubelet[2373]: I0813 00:16:43.907345 2373 apiserver.go:52] "Watching apiserver" Aug 13 00:16:44.014838 kubelet[2373]: E0813 00:16:44.014709 2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-0-684996fd0b\" not found" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:44.036759 kubelet[2373]: I0813 00:16:44.036618 2373 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:44.036759 kubelet[2373]: E0813 00:16:44.036738 2373 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-0-684996fd0b\": node \"ci-4081-3-5-0-684996fd0b\" not found" Aug 13 00:16:44.058222 kubelet[2373]: I0813 00:16:44.056475 2373 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:16:44.126553 kubelet[2373]: E0813 00:16:44.126335 2373 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-0-684996fd0b.185b2b65c6aca963 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-0-684996fd0b,UID:ci-4081-3-5-0-684996fd0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-0-684996fd0b,},FirstTimestamp:2025-08-13 00:16:35.915835747 +0000 UTC m=+5.625820133,LastTimestamp:2025-08-13 00:16:35.915835747 +0000 UTC m=+5.625820133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-0-684996fd0b,}" Aug 13 00:16:44.283347 kubelet[2373]: E0813 00:16:44.281037 2373 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-5-0-684996fd0b.185b2b65c77eb7b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-0-684996fd0b,UID:ci-4081-3-5-0-684996fd0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-0-684996fd0b,},FirstTimestamp:2025-08-13 00:16:35.929601974 +0000 UTC m=+5.639586440,LastTimestamp:2025-08-13 00:16:35.929601974 +0000 UTC m=+5.639586440,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-0-684996fd0b,}" Aug 13 00:16:46.159534 kubelet[2373]: I0813 00:16:46.158008 2373 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" podStartSLOduration=2.15797507 podStartE2EDuration="2.15797507s" podCreationTimestamp="2025-08-13 00:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:46.157816696 +0000 UTC m=+15.867801082" watchObservedRunningTime="2025-08-13 00:16:46.15797507 +0000 UTC m=+15.867959456" Aug 13 00:16:46.818576 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-7.scope)... Aug 13 00:16:46.818614 systemd[1]: Reloading... Aug 13 00:16:47.229269 zram_generator::config[2686]: No configuration found. Aug 13 00:16:47.765946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:16:48.082160 systemd[1]: Reloading finished in 1262 ms. Aug 13 00:16:48.182151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:48.210521 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:16:48.211151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:48.229962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:16:48.679593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:16:48.710243 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:16:48.986481 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
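The docker.socket warning repeated above (first during the earlier reload, again here) means line 6 of the unit still reads ListenStream=/var/run/docker.sock. One way to correct it without touching the read-only /usr tree that Flatcar ships is a copy under /etc/systemd/system, which takes precedence over /usr/lib; a sketch, after verifying the unit's actual line first:

    # Override the vendor unit from /etc (higher precedence than /usr/lib).
    cp /usr/lib/systemd/system/docker.socket /etc/systemd/system/docker.socket
    # Point the socket at the non-legacy location the warning asks for.
    sed -i 's|ListenStream=/var/run/docker.sock|ListenStream=/run/docker.sock|' /etc/systemd/system/docker.socket
    systemctl daemon-reload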
Aug 13 00:16:48.986481 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:16:48.986481 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:16:48.987190 kubelet[2741]: I0813 00:16:48.986580 2741 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:16:49.023642 kubelet[2741]: I0813 00:16:49.022548 2741 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:16:49.023642 kubelet[2741]: I0813 00:16:49.022623 2741 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:16:49.026307 kubelet[2741]: I0813 00:16:49.026248 2741 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:16:49.048699 kubelet[2741]: I0813 00:16:49.047531 2741 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:16:49.059225 kubelet[2741]: I0813 00:16:49.057139 2741 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:16:49.081760 kubelet[2741]: E0813 00:16:49.081347 2741 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:16:49.081760 kubelet[2741]: I0813 00:16:49.081422 2741 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:16:49.100772 kubelet[2741]: I0813 00:16:49.100714 2741 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:16:49.106264 kubelet[2741]: I0813 00:16:49.105679 2741 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:16:49.106264 kubelet[2741]: I0813 00:16:49.105945 2741 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:16:49.106665 kubelet[2741]: I0813 00:16:49.105997 2741 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-0-684996fd0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:16:49.106665 kubelet[2741]: I0813 00:16:49.106419 2741 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:16:49.106665 kubelet[2741]: I0813 00:16:49.106442 2741 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:16:49.106665 kubelet[2741]: I0813 00:16:49.106539 2741 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:49.107348 kubelet[2741]: I0813 00:16:49.106777 2741 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:16:49.107348 kubelet[2741]: I0813 00:16:49.106804 2741 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:16:49.107348 kubelet[2741]: I0813 00:16:49.106845 2741 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:16:49.107348 kubelet[2741]: I0813 00:16:49.106874 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:16:49.138492 kubelet[2741]: I0813 00:16:49.138419 2741 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:16:49.147862 kubelet[2741]: I0813 00:16:49.147791 2741 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:16:49.164233 kubelet[2741]: I0813 00:16:49.159409 2741 server.go:1274] "Started kubelet" Aug 13 00:16:49.183783 kubelet[2741]: I0813 00:16:49.182432 2741 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:16:49.194959 
kubelet[2741]: I0813 00:16:49.193070 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:16:49.199356 kubelet[2741]: I0813 00:16:49.199172 2741 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:16:49.206872 kubelet[2741]: I0813 00:16:49.202370 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:16:49.206872 kubelet[2741]: I0813 00:16:49.202949 2741 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:16:49.219232 kubelet[2741]: I0813 00:16:49.217849 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:16:49.244247 kubelet[2741]: I0813 00:16:49.237106 2741 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:16:49.249235 kubelet[2741]: E0813 00:16:49.245725 2741 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-0-684996fd0b\" not found" Aug 13 00:16:49.249235 kubelet[2741]: I0813 00:16:49.247044 2741 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:16:49.249235 kubelet[2741]: I0813 00:16:49.247812 2741 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:16:49.349246 kubelet[2741]: I0813 00:16:49.341465 2741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:16:49.351978 kubelet[2741]: I0813 00:16:49.351907 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:16:49.358246 kubelet[2741]: I0813 00:16:49.357111 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:16:49.358246 kubelet[2741]: I0813 00:16:49.357167 2741 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:16:49.358246 kubelet[2741]: I0813 00:16:49.357228 2741 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:16:49.358621 kubelet[2741]: E0813 00:16:49.358348 2741 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:16:49.358772 kubelet[2741]: E0813 00:16:49.358655 2741 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-0-684996fd0b\" not found" Aug 13 00:16:49.510926 kubelet[2741]: E0813 00:16:49.509807 2741 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:16:49.514813 kubelet[2741]: I0813 00:16:49.512425 2741 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:16:49.515127 kubelet[2741]: I0813 00:16:49.515095 2741 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:16:49.553641 kubelet[2741]: E0813 00:16:49.513059 2741 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:16:49.760787 kubelet[2741]: E0813 00:16:49.760435 2741 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:16:49.989971 kubelet[2741]: I0813 00:16:49.989929 2741 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:16:49.991164 kubelet[2741]: I0813 00:16:49.991060 2741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:16:49.991926 kubelet[2741]: I0813 00:16:49.991660 2741 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:16:49.994509 kubelet[2741]: I0813 00:16:49.993047 2741 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:16:49.994509 kubelet[2741]: I0813 00:16:49.993077 2741 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:16:49.994509 kubelet[2741]: I0813 00:16:49.993118 2741 policy_none.go:49] "None policy: Start" Aug 13 00:16:50.001146 kubelet[2741]: I0813 00:16:49.998599 2741 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:16:50.001146 kubelet[2741]: I0813 00:16:49.998654 2741 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:16:50.001146 kubelet[2741]: I0813 00:16:49.998961 2741 state_mem.go:75] "Updated machine memory state" Aug 13 00:16:50.013095 kubelet[2741]: I0813 00:16:50.012265 2741 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:16:50.013095 kubelet[2741]: I0813 00:16:50.012659 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:16:50.014782 kubelet[2741]: I0813 00:16:50.014600 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:16:50.018668 kubelet[2741]: I0813 00:16:50.018629 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:16:50.126225 kubelet[2741]: I0813 00:16:50.122055 2741 apiserver.go:52] "Watching apiserver" Aug 13 00:16:50.149122 kubelet[2741]: I0813 00:16:50.149051 2741 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.194030 kubelet[2741]: I0813 00:16:50.193782 2741 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.194030 kubelet[2741]: I0813 00:16:50.193944 2741 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.220114 kubelet[2741]: E0813 00:16:50.209159 2741 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.256300 kubelet[2741]: I0813 00:16:50.253167 2741 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:16:50.275180 kubelet[2741]: I0813 00:16:50.274398 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275180 kubelet[2741]: I0813 00:16:50.274491 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275180 kubelet[2741]: I0813 00:16:50.274541 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275180 kubelet[2741]: I0813 00:16:50.274683 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275180 kubelet[2741]: I0813 00:16:50.274733 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275706 kubelet[2741]: I0813 00:16:50.274773 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275706 kubelet[2741]: I0813 00:16:50.274814 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dce4ca6ee05e13835ac167a14c772c7-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-0-684996fd0b\" (UID: \"0dce4ca6ee05e13835ac167a14c772c7\") " pod="kube-system/kube-scheduler-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275706 kubelet[2741]: I0813 00:16:50.274850 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54fc0ea4b93ab5b1231ab715c0904ff2-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-0-684996fd0b\" (UID: \"54fc0ea4b93ab5b1231ab715c0904ff2\") " pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.275706 kubelet[2741]: I0813 00:16:50.274888 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5318c4ed7dd9b92dfe3b9bc42da36778-kubeconfig\") 
pod \"kube-controller-manager-ci-4081-3-5-0-684996fd0b\" (UID: \"5318c4ed7dd9b92dfe3b9bc42da36778\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" Aug 13 00:16:50.296225 kubelet[2741]: I0813 00:16:50.292372 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-0-684996fd0b" podStartSLOduration=4.29234314 podStartE2EDuration="4.29234314s" podCreationTimestamp="2025-08-13 00:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:50.290997351 +0000 UTC m=+1.558721995" watchObservedRunningTime="2025-08-13 00:16:50.29234314 +0000 UTC m=+1.560067744" Aug 13 00:16:50.518364 kubelet[2741]: I0813 00:16:50.517282 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-0-684996fd0b" podStartSLOduration=0.51725197 podStartE2EDuration="517.25197ms" podCreationTimestamp="2025-08-13 00:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:50.516721247 +0000 UTC m=+1.784445891" watchObservedRunningTime="2025-08-13 00:16:50.51725197 +0000 UTC m=+1.784976614" Aug 13 00:16:54.075912 kubelet[2741]: I0813 00:16:54.075597 2741 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:16:54.078362 containerd[1592]: time="2025-08-13T00:16:54.077789802Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:16:54.079059 kubelet[2741]: I0813 00:16:54.078351 2741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:16:55.123359 kubelet[2741]: I0813 00:16:55.123293 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6624da7-e5b8-45ce-9326-884f7a05691e-lib-modules\") pod \"kube-proxy-2phqv\" (UID: \"f6624da7-e5b8-45ce-9326-884f7a05691e\") " pod="kube-system/kube-proxy-2phqv" Aug 13 00:16:55.124454 kubelet[2741]: I0813 00:16:55.123369 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm49q\" (UniqueName: \"kubernetes.io/projected/f6624da7-e5b8-45ce-9326-884f7a05691e-kube-api-access-gm49q\") pod \"kube-proxy-2phqv\" (UID: \"f6624da7-e5b8-45ce-9326-884f7a05691e\") " pod="kube-system/kube-proxy-2phqv" Aug 13 00:16:55.124454 kubelet[2741]: I0813 00:16:55.123418 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6624da7-e5b8-45ce-9326-884f7a05691e-kube-proxy\") pod \"kube-proxy-2phqv\" (UID: \"f6624da7-e5b8-45ce-9326-884f7a05691e\") " pod="kube-system/kube-proxy-2phqv" Aug 13 00:16:55.124454 kubelet[2741]: I0813 00:16:55.123462 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6624da7-e5b8-45ce-9326-884f7a05691e-xtables-lock\") pod \"kube-proxy-2phqv\" (UID: \"f6624da7-e5b8-45ce-9326-884f7a05691e\") " pod="kube-system/kube-proxy-2phqv" Aug 13 00:16:55.312395 containerd[1592]: time="2025-08-13T00:16:55.312289021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-2phqv,Uid:f6624da7-e5b8-45ce-9326-884f7a05691e,Namespace:kube-system,Attempt:0,}" Aug 13 00:16:55.324834 kubelet[2741]: I0813 00:16:55.324607 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9553a641-fa96-439a-9a28-63530f48ce8d-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-npxrt\" (UID: \"9553a641-fa96-439a-9a28-63530f48ce8d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-npxrt" Aug 13 00:16:55.324834 kubelet[2741]: I0813 00:16:55.324704 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vvkz\" (UniqueName: \"kubernetes.io/projected/9553a641-fa96-439a-9a28-63530f48ce8d-kube-api-access-5vvkz\") pod \"tigera-operator-5bf8dfcb4-npxrt\" (UID: \"9553a641-fa96-439a-9a28-63530f48ce8d\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-npxrt" Aug 13 00:16:55.375124 containerd[1592]: time="2025-08-13T00:16:55.373645327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:55.375124 containerd[1592]: time="2025-08-13T00:16:55.373755175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:55.375124 containerd[1592]: time="2025-08-13T00:16:55.373816700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:55.375124 containerd[1592]: time="2025-08-13T00:16:55.374484189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:55.494096 containerd[1592]: time="2025-08-13T00:16:55.493225029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2phqv,Uid:f6624da7-e5b8-45ce-9326-884f7a05691e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3ad1268acb550a851c27f294cba4a15055744e7bd35641a0f23563c3490058d\"" Aug 13 00:16:55.505303 containerd[1592]: time="2025-08-13T00:16:55.505218830Z" level=info msg="CreateContainer within sandbox \"c3ad1268acb550a851c27f294cba4a15055744e7bd35641a0f23563c3490058d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:16:55.537318 containerd[1592]: time="2025-08-13T00:16:55.535914885Z" level=info msg="CreateContainer within sandbox \"c3ad1268acb550a851c27f294cba4a15055744e7bd35641a0f23563c3490058d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0fdf6f9bf15954a63d1dabbd4a883215f627d179d4d38646f6a11150f4d231b7\"" Aug 13 00:16:55.539245 containerd[1592]: time="2025-08-13T00:16:55.537769341Z" level=info msg="StartContainer for \"0fdf6f9bf15954a63d1dabbd4a883215f627d179d4d38646f6a11150f4d231b7\"" Aug 13 00:16:55.569023 containerd[1592]: time="2025-08-13T00:16:55.568917988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-npxrt,Uid:9553a641-fa96-439a-9a28-63530f48ce8d,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:16:55.658494 containerd[1592]: time="2025-08-13T00:16:55.657247035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:16:55.659359 containerd[1592]: time="2025-08-13T00:16:55.659099331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:16:55.659359 containerd[1592]: time="2025-08-13T00:16:55.659163376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:55.660083 containerd[1592]: time="2025-08-13T00:16:55.659927992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:16:55.723944 containerd[1592]: time="2025-08-13T00:16:55.723859047Z" level=info msg="StartContainer for \"0fdf6f9bf15954a63d1dabbd4a883215f627d179d4d38646f6a11150f4d231b7\" returns successfully" Aug 13 00:16:55.838864 containerd[1592]: time="2025-08-13T00:16:55.838184844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-npxrt,Uid:9553a641-fa96-439a-9a28-63530f48ce8d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"368b49ced91fe757d296057a2fa3023fd7a49a5f2371854687bad37cdfcf4bc5\"" Aug 13 00:16:55.844828 containerd[1592]: time="2025-08-13T00:16:55.844244489Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:16:56.760595 kubelet[2741]: I0813 00:16:56.760352 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2phqv" podStartSLOduration=2.7603158309999998 podStartE2EDuration="2.760315831s" podCreationTimestamp="2025-08-13 00:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:16:56.756703411 +0000 UTC m=+8.024428095" watchObservedRunningTime="2025-08-13 00:16:56.760315831 +0000 UTC m=+8.028040595" Aug 13 00:16:57.519766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239699791.mount: Deactivated successfully. 
Aug 13 00:16:58.718107 containerd[1592]: time="2025-08-13T00:16:58.718030797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.720469 containerd[1592]: time="2025-08-13T00:16:58.720052818Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:16:58.722987 containerd[1592]: time="2025-08-13T00:16:58.722879495Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.740027 containerd[1592]: time="2025-08-13T00:16:58.739512494Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:16:58.743024 containerd[1592]: time="2025-08-13T00:16:58.742925452Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.898594717s" Aug 13 00:16:58.743024 containerd[1592]: time="2025-08-13T00:16:58.743021819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:16:58.750461 containerd[1592]: time="2025-08-13T00:16:58.750195678Z" level=info msg="CreateContainer within sandbox \"368b49ced91fe757d296057a2fa3023fd7a49a5f2371854687bad37cdfcf4bc5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:16:58.796469 containerd[1592]: time="2025-08-13T00:16:58.796307971Z" level=info msg="CreateContainer within sandbox \"368b49ced91fe757d296057a2fa3023fd7a49a5f2371854687bad37cdfcf4bc5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d\"" Aug 13 00:16:58.798472 containerd[1592]: time="2025-08-13T00:16:58.798326912Z" level=info msg="StartContainer for \"479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d\"" Aug 13 00:16:58.863928 systemd[1]: run-containerd-runc-k8s.io-479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d-runc.5zWtnt.mount: Deactivated successfully. 
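The containerd entries above put the tigera operator image at 22,150,610 bytes read ("bytes read=22150610"), pulled in 2.898594717s. As a rough, illustrative throughput check in Go (nothing containerd itself computes here; figures are taken verbatim from the log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the containerd pull entries above for
	// quay.io/tigera/operator:v1.38.3.
	const bytesRead = 22150610               // "active requests=0, bytes read=22150610"
	pullTime := 2898594717 * time.Nanosecond // "in 2.898594717s"

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s ~ %.1f MiB/s\n", mib, pullTime, mib/pullTime.Seconds())
	// prints: 21.1 MiB in 2.898594717s ~ 7.3 MiB/s
}
```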
Aug 13 00:16:58.931809 containerd[1592]: time="2025-08-13T00:16:58.929014178Z" level=info msg="StartContainer for \"479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d\" returns successfully" Aug 13 00:16:59.778987 kubelet[2741]: I0813 00:16:59.778860 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-npxrt" podStartSLOduration=1.874534693 podStartE2EDuration="4.778827613s" podCreationTimestamp="2025-08-13 00:16:55 +0000 UTC" firstStartedPulling="2025-08-13 00:16:55.842527923 +0000 UTC m=+7.110252527" lastFinishedPulling="2025-08-13 00:16:58.746820843 +0000 UTC m=+10.014545447" observedRunningTime="2025-08-13 00:16:59.776512814 +0000 UTC m=+11.044237458" watchObservedRunningTime="2025-08-13 00:16:59.778827613 +0000 UTC m=+11.046552217" Aug 13 00:17:08.421639 sudo[1835]: pam_unix(sudo:session): session closed for user root Aug 13 00:17:08.597542 sshd[1831]: pam_unix(sshd:session): session closed for user core Aug 13 00:17:08.618027 systemd[1]: sshd@6-138.201.175.117:22-139.178.89.65:42704.service: Deactivated successfully. Aug 13 00:17:08.633861 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:17:08.635379 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:17:08.647519 systemd-logind[1570]: Removed session 7. Aug 13 00:17:28.376180 kubelet[2741]: I0813 00:17:28.375299 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12c00afb-e8c8-4968-8c72-3ab4f46148d0-typha-certs\") pod \"calico-typha-749d786678-c289n\" (UID: \"12c00afb-e8c8-4968-8c72-3ab4f46148d0\") " pod="calico-system/calico-typha-749d786678-c289n" Aug 13 00:17:28.376180 kubelet[2741]: I0813 00:17:28.375420 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c00afb-e8c8-4968-8c72-3ab4f46148d0-tigera-ca-bundle\") pod \"calico-typha-749d786678-c289n\" (UID: \"12c00afb-e8c8-4968-8c72-3ab4f46148d0\") " pod="calico-system/calico-typha-749d786678-c289n" Aug 13 00:17:28.376180 kubelet[2741]: I0813 00:17:28.375474 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlz8p\" (UniqueName: \"kubernetes.io/projected/12c00afb-e8c8-4968-8c72-3ab4f46148d0-kube-api-access-hlz8p\") pod \"calico-typha-749d786678-c289n\" (UID: \"12c00afb-e8c8-4968-8c72-3ab4f46148d0\") " pod="calico-system/calico-typha-749d786678-c289n" Aug 13 00:17:28.779135 kubelet[2741]: I0813 00:17:28.778947 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-policysync\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779135 kubelet[2741]: I0813 00:17:28.779041 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-var-run-calico\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779135 kubelet[2741]: I0813 00:17:28.779089 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-flexvol-driver-host\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779135 kubelet[2741]: I0813 00:17:28.779137 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcd578c6-953e-4029-ba4c-02d865bd7730-tigera-ca-bundle\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779621 kubelet[2741]: I0813 00:17:28.779180 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-var-lib-calico\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779621 kubelet[2741]: I0813 00:17:28.779248 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-xtables-lock\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779621 kubelet[2741]: I0813 00:17:28.779292 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-cni-bin-dir\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779621 kubelet[2741]: I0813 00:17:28.779329 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-cni-net-dir\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.779621 kubelet[2741]: I0813 00:17:28.779368 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fcd578c6-953e-4029-ba4c-02d865bd7730-node-certs\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.780136 kubelet[2741]: I0813 00:17:28.779406 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvj2\" (UniqueName: \"kubernetes.io/projected/fcd578c6-953e-4029-ba4c-02d865bd7730-kube-api-access-kwvj2\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.780136 kubelet[2741]: I0813 00:17:28.779444 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-lib-modules\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.780136 kubelet[2741]: I0813 00:17:28.779484 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/fcd578c6-953e-4029-ba4c-02d865bd7730-cni-log-dir\") pod \"calico-node-v65z4\" (UID: \"fcd578c6-953e-4029-ba4c-02d865bd7730\") " pod="calico-system/calico-node-v65z4" Aug 13 00:17:28.839602 containerd[1592]: time="2025-08-13T00:17:28.839499399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-749d786678-c289n,Uid:12c00afb-e8c8-4968-8c72-3ab4f46148d0,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:28.872508 kubelet[2741]: E0813 00:17:28.871873 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:28.925065 kubelet[2741]: E0813 00:17:28.924982 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:28.925065 kubelet[2741]: W0813 00:17:28.925038 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:28.925065 kubelet[2741]: E0813 00:17:28.925083 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:28.928716 kubelet[2741]: E0813 00:17:28.928654 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:28.928716 kubelet[2741]: W0813 00:17:28.928717 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:28.933309 kubelet[2741]: E0813 00:17:28.929289 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:28.933309 kubelet[2741]: E0813 00:17:28.930108 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:28.933309 kubelet[2741]: W0813 00:17:28.930136 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:28.933309 kubelet[2741]: E0813 00:17:28.930166 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:28.933309 kubelet[2741]: E0813 00:17:28.932559 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:28.933309 kubelet[2741]: W0813 00:17:28.932592 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:28.933309 kubelet[2741]: E0813 00:17:28.932832 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same driver-call.go:262, driver-call.go:149, and plugins.go:691 FlexVolume probe messages repeat, timestamps aside, through Aug 13 00:17:29.001 ...] Aug 13 00:17:29.005273 kubelet[2741]: E0813 00:17:29.005112 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.006485 kubelet[2741]: W0813 00:17:29.005183 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.007669 kubelet[2741]: E0813 00:17:29.007373 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:17:29.014729 kubelet[2741]: E0813 00:17:29.014676 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.014729 kubelet[2741]: W0813 00:17:29.014723 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.015002 kubelet[2741]: E0813 00:17:29.014760 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.015002 kubelet[2741]: I0813 00:17:29.014814 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/851fc6af-b9af-4d67-92e5-4dcf6cbec03a-socket-dir\") pod \"csi-node-driver-6hw4j\" (UID: \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\") " pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:29.018675 kubelet[2741]: E0813 00:17:29.018622 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.019662 kubelet[2741]: W0813 00:17:29.018738 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.019662 kubelet[2741]: E0813 00:17:29.019430 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.022033 kubelet[2741]: E0813 00:17:29.021974 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.022152 kubelet[2741]: W0813 00:17:29.022080 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.024443 kubelet[2741]: E0813 00:17:29.024363 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.027540 containerd[1592]: time="2025-08-13T00:17:29.025359733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:29.027540 containerd[1592]: time="2025-08-13T00:17:29.025470894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:29.027540 containerd[1592]: time="2025-08-13T00:17:29.025536294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:29.027540 containerd[1592]: time="2025-08-13T00:17:29.025737017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:29.031593 kubelet[2741]: E0813 00:17:29.031303 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.031593 kubelet[2741]: W0813 00:17:29.031459 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.035180 kubelet[2741]: E0813 00:17:29.031614 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.036974 kubelet[2741]: E0813 00:17:29.035983 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.036974 kubelet[2741]: W0813 00:17:29.036027 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.036974 kubelet[2741]: E0813 00:17:29.036064 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.038620 kubelet[2741]: I0813 00:17:29.037349 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/851fc6af-b9af-4d67-92e5-4dcf6cbec03a-registration-dir\") pod \"csi-node-driver-6hw4j\" (UID: \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\") " pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:29.040098 kubelet[2741]: E0813 00:17:29.039589 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.040183 kubelet[2741]: W0813 00:17:29.039797 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.042941 kubelet[2741]: E0813 00:17:29.041357 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.042941 kubelet[2741]: I0813 00:17:29.041426 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/851fc6af-b9af-4d67-92e5-4dcf6cbec03a-varrun\") pod \"csi-node-driver-6hw4j\" (UID: \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\") " pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:29.045642 kubelet[2741]: E0813 00:17:29.044463 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.045642 kubelet[2741]: W0813 00:17:29.044621 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.045642 kubelet[2741]: E0813 00:17:29.045085 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:29.059534 kubelet[2741]: E0813 00:17:29.055192 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.059534 kubelet[2741]: W0813 00:17:29.055271 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.059534 kubelet[2741]: E0813 00:17:29.057346 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.074184 kubelet[2741]: E0813 00:17:29.069219 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.074184 kubelet[2741]: W0813 00:17:29.069266 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.074184 kubelet[2741]: E0813 00:17:29.072761 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.074184 kubelet[2741]: I0813 00:17:29.073113 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/851fc6af-b9af-4d67-92e5-4dcf6cbec03a-kubelet-dir\") pod \"csi-node-driver-6hw4j\" (UID: \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\") " pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:29.075577 kubelet[2741]: E0813 00:17:29.075159 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.077310 kubelet[2741]: W0813 00:17:29.076405 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.084395 kubelet[2741]: E0813 00:17:29.084315 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.087168 kubelet[2741]: E0813 00:17:29.087014 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.087168 kubelet[2741]: W0813 00:17:29.087069 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.093952 kubelet[2741]: E0813 00:17:29.090392 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:29.093952 kubelet[2741]: I0813 00:17:29.090483 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6q4p\" (UniqueName: \"kubernetes.io/projected/851fc6af-b9af-4d67-92e5-4dcf6cbec03a-kube-api-access-r6q4p\") pod \"csi-node-driver-6hw4j\" (UID: \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\") " pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:29.093952 kubelet[2741]: E0813 00:17:29.092333 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.093952 kubelet[2741]: W0813 00:17:29.092364 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.096244 kubelet[2741]: E0813 00:17:29.094148 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.097554 kubelet[2741]: E0813 00:17:29.097481 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.097554 kubelet[2741]: W0813 00:17:29.097532 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.097774 kubelet[2741]: E0813 00:17:29.097567 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.103328 kubelet[2741]: E0813 00:17:29.102492 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.103328 kubelet[2741]: W0813 00:17:29.102539 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.104003 kubelet[2741]: E0813 00:17:29.103932 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.107779 kubelet[2741]: E0813 00:17:29.107601 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.107779 kubelet[2741]: W0813 00:17:29.107648 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.107779 kubelet[2741]: E0813 00:17:29.107687 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... the same driver-call.go:262, driver-call.go:149, and plugins.go:691 FlexVolume probe messages repeat, timestamps aside, through Aug 13 00:17:29.219 ...] Aug 13 00:17:29.222310 kubelet[2741]: E0813 00:17:29.220493 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.222310 kubelet[2741]: W0813 00:17:29.220527 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.222310 kubelet[2741]: E0813 00:17:29.220858 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.222168 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.223000 kubelet[2741]: W0813 00:17:29.222195 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.223050 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.224695 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.223000 kubelet[2741]: W0813 00:17:29.224718 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.226097 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.226224 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.223000 kubelet[2741]: W0813 00:17:29.226243 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.226974 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.223000 kubelet[2741]: E0813 00:17:29.228194 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.231109 kubelet[2741]: W0813 00:17:29.228463 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.231109 kubelet[2741]: E0813 00:17:29.229549 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.231109 kubelet[2741]: E0813 00:17:29.230053 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.231109 kubelet[2741]: W0813 00:17:29.230077 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.231109 kubelet[2741]: E0813 00:17:29.230652 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:17:29.237066 kubelet[2741]: E0813 00:17:29.232405 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.237066 kubelet[2741]: W0813 00:17:29.232434 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.237066 kubelet[2741]: E0813 00:17:29.232565 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.237066 kubelet[2741]: E0813 00:17:29.234813 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.237066 kubelet[2741]: W0813 00:17:29.234840 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.237066 kubelet[2741]: E0813 00:17:29.234867 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.239544 containerd[1592]: time="2025-08-13T00:17:29.238587338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v65z4,Uid:fcd578c6-953e-4029-ba4c-02d865bd7730,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:29.279331 kubelet[2741]: E0813 00:17:29.278929 2741 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:17:29.279331 kubelet[2741]: W0813 00:17:29.278991 2741 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:17:29.279331 kubelet[2741]: E0813 00:17:29.279025 2741 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:17:29.333450 containerd[1592]: time="2025-08-13T00:17:29.328497981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:29.333450 containerd[1592]: time="2025-08-13T00:17:29.330811726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:29.333450 containerd[1592]: time="2025-08-13T00:17:29.331558414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:29.333450 containerd[1592]: time="2025-08-13T00:17:29.333096070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:29.591778 containerd[1592]: time="2025-08-13T00:17:29.590044064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v65z4,Uid:fcd578c6-953e-4029-ba4c-02d865bd7730,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\"" Aug 13 00:17:29.600272 containerd[1592]: time="2025-08-13T00:17:29.599735488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:17:29.604976 containerd[1592]: time="2025-08-13T00:17:29.603129484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-749d786678-c289n,Uid:12c00afb-e8c8-4968-8c72-3ab4f46148d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e48d3c6e8828ba7250645b260a3bdd87bee2a4dbe32d9c506a7d64b7ca55168\"" Aug 13 00:17:30.359266 kubelet[2741]: E0813 00:17:30.358042 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:31.050853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358877990.mount: Deactivated successfully. Aug 13 00:17:31.223325 containerd[1592]: time="2025-08-13T00:17:31.223246783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:31.226291 containerd[1592]: time="2025-08-13T00:17:31.226133740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Aug 13 00:17:31.230417 containerd[1592]: time="2025-08-13T00:17:31.230308433Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:31.234748 containerd[1592]: time="2025-08-13T00:17:31.234641809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:31.237390 containerd[1592]: time="2025-08-13T00:17:31.236596834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.636779906s" Aug 13 00:17:31.237390 containerd[1592]: time="2025-08-13T00:17:31.236679715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:17:31.242880 containerd[1592]: time="2025-08-13T00:17:31.242771313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:17:31.250475 containerd[1592]: time="2025-08-13T00:17:31.250364610Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:17:31.287685 
containerd[1592]: time="2025-08-13T00:17:31.286854558Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a\"" Aug 13 00:17:31.301195 containerd[1592]: time="2025-08-13T00:17:31.301102501Z" level=info msg="StartContainer for \"210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a\"" Aug 13 00:17:31.454779 containerd[1592]: time="2025-08-13T00:17:31.454673750Z" level=info msg="StartContainer for \"210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a\" returns successfully" Aug 13 00:17:31.600641 containerd[1592]: time="2025-08-13T00:17:31.600528100Z" level=info msg="shim disconnected" id=210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a namespace=k8s.io Aug 13 00:17:31.600641 containerd[1592]: time="2025-08-13T00:17:31.600630022Z" level=warning msg="cleaning up after shim disconnected" id=210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a namespace=k8s.io Aug 13 00:17:31.600641 containerd[1592]: time="2025-08-13T00:17:31.600654742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:17:31.972270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-210f0c715c4874ada7b2451ca3b1910879ec77738f22a09ec0e0ffdf4b517c1a-rootfs.mount: Deactivated successfully. Aug 13 00:17:32.358583 kubelet[2741]: E0813 00:17:32.358286 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:34.047354 containerd[1592]: time="2025-08-13T00:17:34.047076224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:34.049539 containerd[1592]: time="2025-08-13T00:17:34.049295099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Aug 13 00:17:34.051451 containerd[1592]: time="2025-08-13T00:17:34.051335571Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:34.057785 containerd[1592]: time="2025-08-13T00:17:34.057542388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:34.060304 containerd[1592]: time="2025-08-13T00:17:34.059635301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.816778387s" Aug 13 00:17:34.060304 containerd[1592]: time="2025-08-13T00:17:34.059713783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:17:34.063387 containerd[1592]: time="2025-08-13T00:17:34.062963594Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:17:34.106887 containerd[1592]: time="2025-08-13T00:17:34.106471959Z" level=info msg="CreateContainer within sandbox \"3e48d3c6e8828ba7250645b260a3bdd87bee2a4dbe32d9c506a7d64b7ca55168\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:17:34.136331 containerd[1592]: time="2025-08-13T00:17:34.136104105Z" level=info msg="CreateContainer within sandbox \"3e48d3c6e8828ba7250645b260a3bdd87bee2a4dbe32d9c506a7d64b7ca55168\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4d8567d1c85d26e40ae3780e641c425f5c53e101443fa1aa3d7e526c7f6bd7c9\"" Aug 13 00:17:34.143892 containerd[1592]: time="2025-08-13T00:17:34.142340203Z" level=info msg="StartContainer for \"4d8567d1c85d26e40ae3780e641c425f5c53e101443fa1aa3d7e526c7f6bd7c9\"" Aug 13 00:17:34.299436 containerd[1592]: time="2025-08-13T00:17:34.296751554Z" level=info msg="StartContainer for \"4d8567d1c85d26e40ae3780e641c425f5c53e101443fa1aa3d7e526c7f6bd7c9\" returns successfully" Aug 13 00:17:34.358315 kubelet[2741]: E0813 00:17:34.358216 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:35.042505 kubelet[2741]: I0813 00:17:35.041173 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-749d786678-c289n" podStartSLOduration=2.590897501 podStartE2EDuration="7.041140867s" podCreationTimestamp="2025-08-13 00:17:28 +0000 UTC" firstStartedPulling="2025-08-13 00:17:29.612467344 +0000 UTC m=+40.880191988" lastFinishedPulling="2025-08-13 00:17:34.06271071 +0000 UTC m=+45.330435354" observedRunningTime="2025-08-13 00:17:35.038277059 +0000 UTC m=+46.306001703" watchObservedRunningTime="2025-08-13 00:17:35.041140867 +0000 UTC m=+46.308865471" Aug 13 00:17:36.358862 kubelet[2741]: E0813 00:17:36.358667 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:38.183251 containerd[1592]: time="2025-08-13T00:17:38.183101442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:38.186010 containerd[1592]: time="2025-08-13T00:17:38.185874575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:17:38.188638 containerd[1592]: time="2025-08-13T00:17:38.188532786Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:38.194481 containerd[1592]: time="2025-08-13T00:17:38.194285057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:38.197468 containerd[1592]: time="2025-08-13T00:17:38.196822466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 4.13378191s" Aug 13 00:17:38.197468 containerd[1592]: time="2025-08-13T00:17:38.196904147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:17:38.208958 containerd[1592]: time="2025-08-13T00:17:38.208826056Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:17:38.240655 containerd[1592]: time="2025-08-13T00:17:38.239971655Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e\"" Aug 13 00:17:38.243924 containerd[1592]: time="2025-08-13T00:17:38.242902671Z" level=info msg="StartContainer for \"c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e\"" Aug 13 00:17:38.358195 kubelet[2741]: E0813 00:17:38.358129 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:38.397734 containerd[1592]: time="2025-08-13T00:17:38.397540524Z" level=info msg="StartContainer for \"c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e\" returns successfully" Aug 13 00:17:39.453689 containerd[1592]: time="2025-08-13T00:17:39.453600668Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:17:39.513588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e-rootfs.mount: Deactivated successfully. 
Aug 13 00:17:39.557382 kubelet[2741]: I0813 00:17:39.557302 2741 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:17:39.615330 containerd[1592]: time="2025-08-13T00:17:39.614071522Z" level=info msg="shim disconnected" id=c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e namespace=k8s.io Aug 13 00:17:39.615330 containerd[1592]: time="2025-08-13T00:17:39.614174324Z" level=warning msg="cleaning up after shim disconnected" id=c3f142b58311bf5c858e6d07361494bb2027324c86a1db35096400cbadbbe83e namespace=k8s.io Aug 13 00:17:39.615330 containerd[1592]: time="2025-08-13T00:17:39.614196325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:17:39.719048 containerd[1592]: time="2025-08-13T00:17:39.717132946Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:17:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:17:39.728332 kubelet[2741]: I0813 00:17:39.727977 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhdj\" (UniqueName: \"kubernetes.io/projected/2fa714a0-45d3-4601-af8c-8a6aebd91ca9-kube-api-access-fmhdj\") pod \"coredns-7c65d6cfc9-gzcsq\" (UID: \"2fa714a0-45d3-4601-af8c-8a6aebd91ca9\") " pod="kube-system/coredns-7c65d6cfc9-gzcsq" Aug 13 00:17:39.728652 kubelet[2741]: I0813 00:17:39.728445 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9hl\" (UniqueName: \"kubernetes.io/projected/6428bddd-411d-486c-a798-4f373ec4640c-kube-api-access-nf9hl\") pod \"coredns-7c65d6cfc9-nf8md\" (UID: \"6428bddd-411d-486c-a798-4f373ec4640c\") " pod="kube-system/coredns-7c65d6cfc9-nf8md" Aug 13 00:17:39.728790 kubelet[2741]: I0813 00:17:39.728634 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6428bddd-411d-486c-a798-4f373ec4640c-config-volume\") pod \"coredns-7c65d6cfc9-nf8md\" (UID: \"6428bddd-411d-486c-a798-4f373ec4640c\") " pod="kube-system/coredns-7c65d6cfc9-nf8md" Aug 13 00:17:39.728868 kubelet[2741]: I0813 00:17:39.728822 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fa714a0-45d3-4601-af8c-8a6aebd91ca9-config-volume\") pod \"coredns-7c65d6cfc9-gzcsq\" (UID: \"2fa714a0-45d3-4601-af8c-8a6aebd91ca9\") " pod="kube-system/coredns-7c65d6cfc9-gzcsq" Aug 13 00:17:39.830901 kubelet[2741]: I0813 00:17:39.829967 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/211992a4-e03b-407a-b6e8-049cb37a8c67-goldmane-key-pair\") pod \"goldmane-58fd7646b9-75vg2\" (UID: \"211992a4-e03b-407a-b6e8-049cb37a8c67\") " pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:39.830901 kubelet[2741]: I0813 00:17:39.830094 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660c42fe-0b74-4f87-a47c-3e9a64771e8c-tigera-ca-bundle\") pod \"calico-kube-controllers-855f47cdff-5778s\" (UID: \"660c42fe-0b74-4f87-a47c-3e9a64771e8c\") " pod="calico-system/calico-kube-controllers-855f47cdff-5778s" Aug 13 00:17:39.830901 kubelet[2741]: I0813 00:17:39.830145 2741 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnjj6\" (UniqueName: \"kubernetes.io/projected/211992a4-e03b-407a-b6e8-049cb37a8c67-kube-api-access-mnjj6\") pod \"goldmane-58fd7646b9-75vg2\" (UID: \"211992a4-e03b-407a-b6e8-049cb37a8c67\") " pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:39.837259 kubelet[2741]: I0813 00:17:39.830254 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/211992a4-e03b-407a-b6e8-049cb37a8c67-config\") pod \"goldmane-58fd7646b9-75vg2\" (UID: \"211992a4-e03b-407a-b6e8-049cb37a8c67\") " pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:39.837259 kubelet[2741]: I0813 00:17:39.833072 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f19c543-3502-4177-93bc-f402734db516-calico-apiserver-certs\") pod \"calico-apiserver-79d645b7db-qg8sr\" (UID: \"2f19c543-3502-4177-93bc-f402734db516\") " pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" Aug 13 00:17:39.837259 kubelet[2741]: I0813 00:17:39.833937 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92hb\" (UniqueName: \"kubernetes.io/projected/984ef076-047a-48f9-82ef-15fe5f9ec37d-kube-api-access-q92hb\") pod \"whisker-6cc5bfd4c-rq45m\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " pod="calico-system/whisker-6cc5bfd4c-rq45m" Aug 13 00:17:39.837259 kubelet[2741]: I0813 00:17:39.834098 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/211992a4-e03b-407a-b6e8-049cb37a8c67-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-75vg2\" (UID: \"211992a4-e03b-407a-b6e8-049cb37a8c67\") " pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:39.837259 kubelet[2741]: I0813 00:17:39.834174 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdbj\" (UniqueName: \"kubernetes.io/projected/2f19c543-3502-4177-93bc-f402734db516-kube-api-access-ffdbj\") pod \"calico-apiserver-79d645b7db-qg8sr\" (UID: \"2f19c543-3502-4177-93bc-f402734db516\") " pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" Aug 13 00:17:39.837871 kubelet[2741]: I0813 00:17:39.834279 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpd8w\" (UniqueName: \"kubernetes.io/projected/660c42fe-0b74-4f87-a47c-3e9a64771e8c-kube-api-access-bpd8w\") pod \"calico-kube-controllers-855f47cdff-5778s\" (UID: \"660c42fe-0b74-4f87-a47c-3e9a64771e8c\") " pod="calico-system/calico-kube-controllers-855f47cdff-5778s" Aug 13 00:17:39.837871 kubelet[2741]: I0813 00:17:39.834363 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-ca-bundle\") pod \"whisker-6cc5bfd4c-rq45m\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " pod="calico-system/whisker-6cc5bfd4c-rq45m" Aug 13 00:17:39.837871 kubelet[2741]: I0813 00:17:39.834508 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/960cd058-0fa0-479e-b7da-fdbdd01280da-calico-apiserver-certs\") pod \"calico-apiserver-79d645b7db-rhp59\" (UID: \"960cd058-0fa0-479e-b7da-fdbdd01280da\") " pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" Aug 13 00:17:39.837871 kubelet[2741]: I0813 00:17:39.834555 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6hk\" (UniqueName: \"kubernetes.io/projected/960cd058-0fa0-479e-b7da-fdbdd01280da-kube-api-access-5t6hk\") pod \"calico-apiserver-79d645b7db-rhp59\" (UID: \"960cd058-0fa0-479e-b7da-fdbdd01280da\") " pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" Aug 13 00:17:39.837871 kubelet[2741]: I0813 00:17:39.834945 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-backend-key-pair\") pod \"whisker-6cc5bfd4c-rq45m\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " pod="calico-system/whisker-6cc5bfd4c-rq45m" Aug 13 00:17:40.019905 containerd[1592]: time="2025-08-13T00:17:40.019686940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzcsq,Uid:2fa714a0-45d3-4601-af8c-8a6aebd91ca9,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:40.083176 containerd[1592]: time="2025-08-13T00:17:40.083086139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf8md,Uid:6428bddd-411d-486c-a798-4f373ec4640c,Namespace:kube-system,Attempt:0,}" Aug 13 00:17:40.099848 containerd[1592]: time="2025-08-13T00:17:40.099244075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-qg8sr,Uid:2f19c543-3502-4177-93bc-f402734db516,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:40.120981 containerd[1592]: time="2025-08-13T00:17:40.120910886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:17:40.363912 containerd[1592]: time="2025-08-13T00:17:40.363562774Z" level=error msg="Failed to destroy network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.366267 containerd[1592]: time="2025-08-13T00:17:40.365831221Z" level=error msg="encountered an error cleaning up failed sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.366267 containerd[1592]: time="2025-08-13T00:17:40.365955464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf8md,Uid:6428bddd-411d-486c-a798-4f373ec4640c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.368662 kubelet[2741]: E0813 00:17:40.366479 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.368662 kubelet[2741]: E0813 00:17:40.366589 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nf8md" Aug 13 00:17:40.368662 kubelet[2741]: E0813 00:17:40.366638 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nf8md" Aug 13 00:17:40.368947 kubelet[2741]: E0813 00:17:40.366714 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nf8md_kube-system(6428bddd-411d-486c-a798-4f373ec4640c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nf8md_kube-system(6428bddd-411d-486c-a798-4f373ec4640c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nf8md" podUID="6428bddd-411d-486c-a798-4f373ec4640c" Aug 13 00:17:40.376555 containerd[1592]: time="2025-08-13T00:17:40.375806629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hw4j,Uid:851fc6af-b9af-4d67-92e5-4dcf6cbec03a,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:40.381410 containerd[1592]: time="2025-08-13T00:17:40.379777912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75vg2,Uid:211992a4-e03b-407a-b6e8-049cb37a8c67,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:40.404869 containerd[1592]: time="2025-08-13T00:17:40.403877693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc5bfd4c-rq45m,Uid:984ef076-047a-48f9-82ef-15fe5f9ec37d,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:40.404869 containerd[1592]: time="2025-08-13T00:17:40.404673390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855f47cdff-5778s,Uid:660c42fe-0b74-4f87-a47c-3e9a64771e8c,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:40.405924 containerd[1592]: time="2025-08-13T00:17:40.405834494Z" level=error msg="Failed to destroy network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.407984 containerd[1592]: time="2025-08-13T00:17:40.407909097Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-79d645b7db-rhp59,Uid:960cd058-0fa0-479e-b7da-fdbdd01280da,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:17:40.409365 containerd[1592]: time="2025-08-13T00:17:40.409274325Z" level=error msg="encountered an error cleaning up failed sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.409677 containerd[1592]: time="2025-08-13T00:17:40.409619172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzcsq,Uid:2fa714a0-45d3-4601-af8c-8a6aebd91ca9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.412268 kubelet[2741]: E0813 00:17:40.410299 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.412268 kubelet[2741]: E0813 00:17:40.410436 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gzcsq" Aug 13 00:17:40.412268 kubelet[2741]: E0813 00:17:40.410484 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gzcsq" Aug 13 00:17:40.412590 kubelet[2741]: E0813 00:17:40.410560 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gzcsq_kube-system(2fa714a0-45d3-4601-af8c-8a6aebd91ca9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gzcsq_kube-system(2fa714a0-45d3-4601-af8c-8a6aebd91ca9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gzcsq" podUID="2fa714a0-45d3-4601-af8c-8a6aebd91ca9" Aug 13 00:17:40.631643 containerd[1592]: time="2025-08-13T00:17:40.630176481Z" level=error msg="Failed to destroy network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.642545 containerd[1592]: time="2025-08-13T00:17:40.642457177Z" level=error msg="encountered an error cleaning up failed sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.659228 containerd[1592]: time="2025-08-13T00:17:40.657123122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-qg8sr,Uid:2f19c543-3502-4177-93bc-f402734db516,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.670485 kubelet[2741]: E0813 00:17:40.660741 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.670485 kubelet[2741]: E0813 00:17:40.662079 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" Aug 13 00:17:40.670485 kubelet[2741]: E0813 00:17:40.662128 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" Aug 13 00:17:40.667124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6-shm.mount: Deactivated successfully. 
Aug 13 00:17:40.675670 kubelet[2741]: E0813 00:17:40.662266 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d645b7db-qg8sr_calico-apiserver(2f19c543-3502-4177-93bc-f402734db516)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d645b7db-qg8sr_calico-apiserver(2f19c543-3502-4177-93bc-f402734db516)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" podUID="2f19c543-3502-4177-93bc-f402734db516" Aug 13 00:17:40.978443 containerd[1592]: time="2025-08-13T00:17:40.978298724Z" level=error msg="Failed to destroy network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.985393 containerd[1592]: time="2025-08-13T00:17:40.985059465Z" level=error msg="encountered an error cleaning up failed sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.985587 containerd[1592]: time="2025-08-13T00:17:40.985421192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hw4j,Uid:851fc6af-b9af-4d67-92e5-4dcf6cbec03a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.987524 kubelet[2741]: E0813 00:17:40.987437 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:40.987707 kubelet[2741]: E0813 00:17:40.987540 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:40.987707 kubelet[2741]: E0813 00:17:40.987587 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6hw4j" Aug 13 00:17:40.987707 kubelet[2741]: E0813 00:17:40.987660 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6hw4j_calico-system(851fc6af-b9af-4d67-92e5-4dcf6cbec03a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6hw4j_calico-system(851fc6af-b9af-4d67-92e5-4dcf6cbec03a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:40.989864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c-shm.mount: Deactivated successfully. Aug 13 00:17:40.999169 containerd[1592]: time="2025-08-13T00:17:40.999006835Z" level=error msg="Failed to destroy network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.004111 containerd[1592]: time="2025-08-13T00:17:41.003892139Z" level=error msg="encountered an error cleaning up failed sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.009037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480-shm.mount: Deactivated successfully. 
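The pod_workers entries above carry the same CNI failure nested three quoting layers deep, which is why the journal shows runs of \\\" escapes. A toy reproduction of that layering, assuming each layer formats the inner error with Go's %q verb (the real kubelet call chain differs in detail):

package main

import "fmt"

func main() {
	cni := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`
	cri := fmt.Sprintf("failed to setup network for sandbox %q: %s",
		"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c", cni)
	pod := "csi-node-driver-6hw4j_calico-system(851fc6af-b9af-4d67-92e5-4dcf6cbec03a)"
	// First quoting layer: the sandbox-creation error recorded for the pod.
	create := fmt.Sprintf("Failed to create sandbox for pod %q: rpc error: code = Unknown desc = %s", pod, cri)
	// Second layer: pod_workers wraps that error again when it gives up on the sync.
	sync := fmt.Sprintf("failed to %q for %q with CreatePodSandboxError: %q", "CreatePodSandbox", pod, create)
	// Third layer: the structured logger emits err=%q, so the innermost quotes
	// surface as \\\" in the journal.
	fmt.Printf("\"Error syncing pod, skipping\" err=%q\n", sync)
}

Reading the escape depth is a quick way to tell which component originated a message: one level of escaping is the CRI boundary, three levels is a pod_workers retry of a sandbox error.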
Aug 13 00:17:41.012844 containerd[1592]: time="2025-08-13T00:17:41.009871228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855f47cdff-5778s,Uid:660c42fe-0b74-4f87-a47c-3e9a64771e8c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.013165 kubelet[2741]: E0813 00:17:41.012858 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.013165 kubelet[2741]: E0813 00:17:41.012947 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855f47cdff-5778s" Aug 13 00:17:41.013165 kubelet[2741]: E0813 00:17:41.012989 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855f47cdff-5778s" Aug 13 00:17:41.013475 kubelet[2741]: E0813 00:17:41.013090 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-855f47cdff-5778s_calico-system(660c42fe-0b74-4f87-a47c-3e9a64771e8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-855f47cdff-5778s_calico-system(660c42fe-0b74-4f87-a47c-3e9a64771e8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855f47cdff-5778s" podUID="660c42fe-0b74-4f87-a47c-3e9a64771e8c" Aug 13 00:17:41.029760 containerd[1592]: time="2025-08-13T00:17:41.029666934Z" level=error msg="Failed to destroy network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.030658 containerd[1592]: time="2025-08-13T00:17:41.030436831Z" level=error msg="encountered an error cleaning up failed sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.030658 containerd[1592]: time="2025-08-13T00:17:41.030555513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-rhp59,Uid:960cd058-0fa0-479e-b7da-fdbdd01280da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.031412 kubelet[2741]: E0813 00:17:41.031080 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.031412 kubelet[2741]: E0813 00:17:41.031246 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" Aug 13 00:17:41.031412 kubelet[2741]: E0813 00:17:41.031289 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" Aug 13 00:17:41.031970 kubelet[2741]: E0813 00:17:41.031612 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d645b7db-rhp59_calico-apiserver(960cd058-0fa0-479e-b7da-fdbdd01280da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d645b7db-rhp59_calico-apiserver(960cd058-0fa0-479e-b7da-fdbdd01280da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" podUID="960cd058-0fa0-479e-b7da-fdbdd01280da" Aug 13 00:17:41.070490 containerd[1592]: time="2025-08-13T00:17:41.070322211Z" level=error msg="Failed to destroy network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.072233 containerd[1592]: time="2025-08-13T00:17:41.071976887Z" level=error msg="encountered an error cleaning up failed sandbox 
\"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.073306 containerd[1592]: time="2025-08-13T00:17:41.072169211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75vg2,Uid:211992a4-e03b-407a-b6e8-049cb37a8c67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.073727 kubelet[2741]: E0813 00:17:41.073664 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.073866 kubelet[2741]: E0813 00:17:41.073769 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:41.073866 kubelet[2741]: E0813 00:17:41.073811 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-75vg2" Aug 13 00:17:41.074040 kubelet[2741]: E0813 00:17:41.073913 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-75vg2_calico-system(211992a4-e03b-407a-b6e8-049cb37a8c67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-75vg2_calico-system(211992a4-e03b-407a-b6e8-049cb37a8c67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-75vg2" podUID="211992a4-e03b-407a-b6e8-049cb37a8c67" Aug 13 00:17:41.077878 containerd[1592]: time="2025-08-13T00:17:41.077433164Z" level=error msg="Failed to destroy network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.078831 containerd[1592]: time="2025-08-13T00:17:41.078571949Z" 
level=error msg="encountered an error cleaning up failed sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.078831 containerd[1592]: time="2025-08-13T00:17:41.078694111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc5bfd4c-rq45m,Uid:984ef076-047a-48f9-82ef-15fe5f9ec37d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.079527 kubelet[2741]: E0813 00:17:41.079409 2741 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.079661 kubelet[2741]: E0813 00:17:41.079553 2741 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cc5bfd4c-rq45m" Aug 13 00:17:41.079661 kubelet[2741]: E0813 00:17:41.079617 2741 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cc5bfd4c-rq45m" Aug 13 00:17:41.079803 kubelet[2741]: E0813 00:17:41.079736 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cc5bfd4c-rq45m_calico-system(984ef076-047a-48f9-82ef-15fe5f9ec37d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cc5bfd4c-rq45m_calico-system(984ef076-047a-48f9-82ef-15fe5f9ec37d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cc5bfd4c-rq45m" podUID="984ef076-047a-48f9-82ef-15fe5f9ec37d" Aug 13 00:17:41.123274 kubelet[2741]: I0813 00:17:41.121764 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:41.126671 containerd[1592]: time="2025-08-13T00:17:41.126577184Z" level=info msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" Aug 13 
00:17:41.126999 containerd[1592]: time="2025-08-13T00:17:41.126943632Z" level=info msg="Ensure that sandbox f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f in task-service has been cleanup successfully" Aug 13 00:17:41.136573 kubelet[2741]: I0813 00:17:41.136500 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:41.140195 containerd[1592]: time="2025-08-13T00:17:41.140015193Z" level=info msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" Aug 13 00:17:41.141302 containerd[1592]: time="2025-08-13T00:17:41.141190659Z" level=info msg="Ensure that sandbox 6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265 in task-service has been cleanup successfully" Aug 13 00:17:41.147682 kubelet[2741]: I0813 00:17:41.147469 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:41.149601 containerd[1592]: time="2025-08-13T00:17:41.149524198Z" level=info msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" Aug 13 00:17:41.151558 containerd[1592]: time="2025-08-13T00:17:41.149892086Z" level=info msg="Ensure that sandbox 71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480 in task-service has been cleanup successfully" Aug 13 00:17:41.158751 kubelet[2741]: I0813 00:17:41.157606 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:41.162542 containerd[1592]: time="2025-08-13T00:17:41.162451797Z" level=info msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" Aug 13 00:17:41.162863 containerd[1592]: time="2025-08-13T00:17:41.162807045Z" level=info msg="Ensure that sandbox 89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f in task-service has been cleanup successfully" Aug 13 00:17:41.173883 kubelet[2741]: I0813 00:17:41.173842 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:41.182374 containerd[1592]: time="2025-08-13T00:17:41.181826375Z" level=info msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" Aug 13 00:17:41.186945 containerd[1592]: time="2025-08-13T00:17:41.186536396Z" level=info msg="Ensure that sandbox 88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6 in task-service has been cleanup successfully" Aug 13 00:17:41.199589 kubelet[2741]: I0813 00:17:41.199533 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:41.207693 containerd[1592]: time="2025-08-13T00:17:41.207352845Z" level=info msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" Aug 13 00:17:41.209421 containerd[1592]: time="2025-08-13T00:17:41.209353368Z" level=info msg="Ensure that sandbox ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6 in task-service has been cleanup successfully" Aug 13 00:17:41.239784 kubelet[2741]: I0813 00:17:41.236980 2741 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:41.244701 containerd[1592]: time="2025-08-13T00:17:41.243899713Z" level=info msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" Aug 13 00:17:41.253265 containerd[1592]: time="2025-08-13T00:17:41.251585199Z" level=info msg="Ensure that sandbox c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c in task-service has been cleanup successfully" Aug 13 00:17:41.266853 kubelet[2741]: I0813 00:17:41.266805 2741 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:41.273122 containerd[1592]: time="2025-08-13T00:17:41.272338406Z" level=info msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" Aug 13 00:17:41.273122 containerd[1592]: time="2025-08-13T00:17:41.272734015Z" level=info msg="Ensure that sandbox e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976 in task-service has been cleanup successfully" Aug 13 00:17:41.381169 containerd[1592]: time="2025-08-13T00:17:41.380783584Z" level=error msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" failed" error="failed to destroy network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.381450 kubelet[2741]: E0813 00:17:41.381177 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:41.381450 kubelet[2741]: E0813 00:17:41.381296 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f"} Aug 13 00:17:41.381450 kubelet[2741]: E0813 00:17:41.381403 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"984ef076-047a-48f9-82ef-15fe5f9ec37d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.381784 kubelet[2741]: E0813 00:17:41.381449 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"984ef076-047a-48f9-82ef-15fe5f9ec37d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cc5bfd4c-rq45m" 
podUID="984ef076-047a-48f9-82ef-15fe5f9ec37d" Aug 13 00:17:41.447772 containerd[1592]: time="2025-08-13T00:17:41.446234315Z" level=error msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" failed" error="failed to destroy network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.448044 kubelet[2741]: E0813 00:17:41.446657 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:41.448044 kubelet[2741]: E0813 00:17:41.446738 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6"} Aug 13 00:17:41.448044 kubelet[2741]: E0813 00:17:41.446825 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f19c543-3502-4177-93bc-f402734db516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.448044 kubelet[2741]: E0813 00:17:41.446875 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f19c543-3502-4177-93bc-f402734db516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" podUID="2f19c543-3502-4177-93bc-f402734db516" Aug 13 00:17:41.472136 containerd[1592]: time="2025-08-13T00:17:41.471375217Z" level=error msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" failed" error="failed to destroy network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.473976 kubelet[2741]: E0813 00:17:41.471777 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:41.473976 
kubelet[2741]: E0813 00:17:41.471860 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265"} Aug 13 00:17:41.473976 kubelet[2741]: E0813 00:17:41.471935 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"960cd058-0fa0-479e-b7da-fdbdd01280da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.473976 kubelet[2741]: E0813 00:17:41.471983 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"960cd058-0fa0-479e-b7da-fdbdd01280da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" podUID="960cd058-0fa0-479e-b7da-fdbdd01280da" Aug 13 00:17:41.493757 containerd[1592]: time="2025-08-13T00:17:41.489419886Z" level=error msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" failed" error="failed to destroy network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.502719 kubelet[2741]: E0813 00:17:41.502381 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:41.502719 kubelet[2741]: E0813 00:17:41.502493 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f"} Aug 13 00:17:41.502719 kubelet[2741]: E0813 00:17:41.502563 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6428bddd-411d-486c-a798-4f373ec4640c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.502719 kubelet[2741]: E0813 00:17:41.502615 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6428bddd-411d-486c-a798-4f373ec4640c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nf8md" podUID="6428bddd-411d-486c-a798-4f373ec4640c" Aug 13 00:17:41.505288 containerd[1592]: time="2025-08-13T00:17:41.505158666Z" level=error msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" failed" error="failed to destroy network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.505748 kubelet[2741]: E0813 00:17:41.505688 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:41.511531 kubelet[2741]: E0813 00:17:41.511309 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c"} Aug 13 00:17:41.511531 kubelet[2741]: E0813 00:17:41.511410 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.511531 kubelet[2741]: E0813 00:17:41.511469 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"851fc6af-b9af-4d67-92e5-4dcf6cbec03a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6hw4j" podUID="851fc6af-b9af-4d67-92e5-4dcf6cbec03a" Aug 13 00:17:41.515571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f-shm.mount: Deactivated successfully. Aug 13 00:17:41.515953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6-shm.mount: Deactivated successfully. Aug 13 00:17:41.516647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265-shm.mount: Deactivated successfully. 
Aug 13 00:17:41.534671 containerd[1592]: time="2025-08-13T00:17:41.534559340Z" level=error msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" failed" error="failed to destroy network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.535005 kubelet[2741]: E0813 00:17:41.534933 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:41.535295 kubelet[2741]: E0813 00:17:41.535091 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480"} Aug 13 00:17:41.535295 kubelet[2741]: E0813 00:17:41.535164 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"660c42fe-0b74-4f87-a47c-3e9a64771e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.535658 kubelet[2741]: E0813 00:17:41.535333 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"660c42fe-0b74-4f87-a47c-3e9a64771e8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855f47cdff-5778s" podUID="660c42fe-0b74-4f87-a47c-3e9a64771e8c" Aug 13 00:17:41.538472 containerd[1592]: time="2025-08-13T00:17:41.537407521Z" level=error msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" failed" error="failed to destroy network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.540322 kubelet[2741]: E0813 00:17:41.537914 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:41.540322 kubelet[2741]: E0813 00:17:41.537992 2741 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6"} Aug 13 00:17:41.540322 kubelet[2741]: E0813 00:17:41.538082 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"211992a4-e03b-407a-b6e8-049cb37a8c67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.540322 kubelet[2741]: E0813 00:17:41.538142 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"211992a4-e03b-407a-b6e8-049cb37a8c67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-75vg2" podUID="211992a4-e03b-407a-b6e8-049cb37a8c67" Aug 13 00:17:41.549616 containerd[1592]: time="2025-08-13T00:17:41.549529622Z" level=error msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" failed" error="failed to destroy network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:17:41.550320 kubelet[2741]: E0813 00:17:41.550175 2741 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:41.550500 kubelet[2741]: E0813 00:17:41.550304 2741 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976"} Aug 13 00:17:41.550500 kubelet[2741]: E0813 00:17:41.550422 2741 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fa714a0-45d3-4601-af8c-8a6aebd91ca9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:17:41.550500 kubelet[2741]: E0813 00:17:41.550471 2741 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fa714a0-45d3-4601-af8c-8a6aebd91ca9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gzcsq" podUID="2fa714a0-45d3-4601-af8c-8a6aebd91ca9" Aug 13 00:17:49.330491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394306471.mount: Deactivated successfully. Aug 13 00:17:49.404627 containerd[1592]: time="2025-08-13T00:17:49.404491999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:49.407448 containerd[1592]: time="2025-08-13T00:17:49.407331395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:17:49.409784 containerd[1592]: time="2025-08-13T00:17:49.409677698Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:49.417864 containerd[1592]: time="2025-08-13T00:17:49.417690713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:49.421149 containerd[1592]: time="2025-08-13T00:17:49.421066884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 9.299409182s" Aug 13 00:17:49.421572 containerd[1592]: time="2025-08-13T00:17:49.421444454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:17:49.485271 containerd[1592]: time="2025-08-13T00:17:49.485116200Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:17:49.533790 containerd[1592]: time="2025-08-13T00:17:49.533674782Z" level=info msg="CreateContainer within sandbox \"9f288948a5d64bdf666f4e173abe17021e8efd987eb5f7cd6999c4f00fa4b0aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5\"" Aug 13 00:17:49.535167 containerd[1592]: time="2025-08-13T00:17:49.534948816Z" level=info msg="StartContainer for \"77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5\"" Aug 13 00:17:49.690384 containerd[1592]: time="2025-08-13T00:17:49.690259899Z" level=info msg="StartContainer for \"77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5\" returns successfully" Aug 13 00:17:49.971469 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:17:49.971734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:17:50.276958 containerd[1592]: time="2025-08-13T00:17:50.276873139Z" level=info msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" Aug 13 00:17:50.734124 kubelet[2741]: I0813 00:17:50.733628 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v65z4" podStartSLOduration=2.905835608 podStartE2EDuration="22.733591161s" podCreationTimestamp="2025-08-13 00:17:28 +0000 UTC" firstStartedPulling="2025-08-13 00:17:29.598792198 +0000 UTC m=+40.866516842" lastFinishedPulling="2025-08-13 00:17:49.426547791 +0000 UTC m=+60.694272395" observedRunningTime="2025-08-13 00:17:50.443094569 +0000 UTC m=+61.710819213" watchObservedRunningTime="2025-08-13 00:17:50.733591161 +0000 UTC m=+62.001315925" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.739 [INFO][3888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.739 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" iface="eth0" netns="/var/run/netns/cni-6d75206d-2689-3eef-b6b2-deb0598243ff" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.740 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" iface="eth0" netns="/var/run/netns/cni-6d75206d-2689-3eef-b6b2-deb0598243ff" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.745 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" iface="eth0" netns="/var/run/netns/cni-6d75206d-2689-3eef-b6b2-deb0598243ff" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.745 [INFO][3888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.745 [INFO][3888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.947 [INFO][3914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.948 [INFO][3914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.948 [INFO][3914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.978 [WARNING][3914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.978 [INFO][3914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.984 [INFO][3914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:51.004582 containerd[1592]: 2025-08-13 00:17:50.990 [INFO][3888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:17:51.004582 containerd[1592]: time="2025-08-13T00:17:51.003333265Z" level=info msg="TearDown network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" successfully" Aug 13 00:17:51.004582 containerd[1592]: time="2025-08-13T00:17:51.003410787Z" level=info msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" returns successfully" Aug 13 00:17:51.033808 systemd[1]: run-netns-cni\x2d6d75206d\x2d2689\x2d3eef\x2db6b2\x2ddeb0598243ff.mount: Deactivated successfully. Aug 13 00:17:51.147997 kubelet[2741]: I0813 00:17:51.147190 2741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-backend-key-pair\") pod \"984ef076-047a-48f9-82ef-15fe5f9ec37d\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " Aug 13 00:17:51.147997 kubelet[2741]: I0813 00:17:51.147345 2741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92hb\" (UniqueName: \"kubernetes.io/projected/984ef076-047a-48f9-82ef-15fe5f9ec37d-kube-api-access-q92hb\") pod \"984ef076-047a-48f9-82ef-15fe5f9ec37d\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " Aug 13 00:17:51.147997 kubelet[2741]: I0813 00:17:51.147410 2741 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-ca-bundle\") pod \"984ef076-047a-48f9-82ef-15fe5f9ec37d\" (UID: \"984ef076-047a-48f9-82ef-15fe5f9ec37d\") " Aug 13 00:17:51.148467 kubelet[2741]: I0813 00:17:51.148120 2741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "984ef076-047a-48f9-82ef-15fe5f9ec37d" (UID: "984ef076-047a-48f9-82ef-15fe5f9ec37d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:17:51.181250 kubelet[2741]: I0813 00:17:51.181101 2741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "984ef076-047a-48f9-82ef-15fe5f9ec37d" (UID: "984ef076-047a-48f9-82ef-15fe5f9ec37d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:17:51.183287 kubelet[2741]: I0813 00:17:51.181194 2741 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984ef076-047a-48f9-82ef-15fe5f9ec37d-kube-api-access-q92hb" (OuterVolumeSpecName: "kube-api-access-q92hb") pod "984ef076-047a-48f9-82ef-15fe5f9ec37d" (UID: "984ef076-047a-48f9-82ef-15fe5f9ec37d"). InnerVolumeSpecName "kube-api-access-q92hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:17:51.190910 systemd[1]: var-lib-kubelet-pods-984ef076\x2d047a\x2d48f9\x2d82ef\x2d15fe5f9ec37d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:17:51.205439 systemd[1]: var-lib-kubelet-pods-984ef076\x2d047a\x2d48f9\x2d82ef\x2d15fe5f9ec37d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq92hb.mount: Deactivated successfully. Aug 13 00:17:51.248438 kubelet[2741]: I0813 00:17:51.248367 2741 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-ca-bundle\") on node \"ci-4081-3-5-0-684996fd0b\" DevicePath \"\"" Aug 13 00:17:51.248438 kubelet[2741]: I0813 00:17:51.248441 2741 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/984ef076-047a-48f9-82ef-15fe5f9ec37d-whisker-backend-key-pair\") on node \"ci-4081-3-5-0-684996fd0b\" DevicePath \"\"" Aug 13 00:17:51.248932 kubelet[2741]: I0813 00:17:51.248473 2741 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92hb\" (UniqueName: \"kubernetes.io/projected/984ef076-047a-48f9-82ef-15fe5f9ec37d-kube-api-access-q92hb\") on node \"ci-4081-3-5-0-684996fd0b\" DevicePath \"\"" Aug 13 00:17:51.484940 systemd[1]: run-containerd-runc-k8s.io-77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5-runc.lyWpk1.mount: Deactivated successfully. 
Aug 13 00:17:51.677315 kubelet[2741]: I0813 00:17:51.677181 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72ca43d6-a77f-4bfa-9775-db67177b1871-whisker-ca-bundle\") pod \"whisker-6bbb989d69-n9r76\" (UID: \"72ca43d6-a77f-4bfa-9775-db67177b1871\") " pod="calico-system/whisker-6bbb989d69-n9r76" Aug 13 00:17:51.678015 kubelet[2741]: I0813 00:17:51.677839 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72ca43d6-a77f-4bfa-9775-db67177b1871-whisker-backend-key-pair\") pod \"whisker-6bbb989d69-n9r76\" (UID: \"72ca43d6-a77f-4bfa-9775-db67177b1871\") " pod="calico-system/whisker-6bbb989d69-n9r76" Aug 13 00:17:51.678015 kubelet[2741]: I0813 00:17:51.677933 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6b2l\" (UniqueName: \"kubernetes.io/projected/72ca43d6-a77f-4bfa-9775-db67177b1871-kube-api-access-v6b2l\") pod \"whisker-6bbb989d69-n9r76\" (UID: \"72ca43d6-a77f-4bfa-9775-db67177b1871\") " pod="calico-system/whisker-6bbb989d69-n9r76" Aug 13 00:17:51.859298 containerd[1592]: time="2025-08-13T00:17:51.858520502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbb989d69-n9r76,Uid:72ca43d6-a77f-4bfa-9775-db67177b1871,Namespace:calico-system,Attempt:0,}" Aug 13 00:17:52.162676 systemd-networkd[1246]: cali2429ef959c6: Link UP Aug 13 00:17:52.163541 systemd-networkd[1246]: cali2429ef959c6: Gained carrier Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:51.939 [INFO][3960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:51.980 [INFO][3960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0 whisker-6bbb989d69- calico-system 72ca43d6-a77f-4bfa-9775-db67177b1871 958 0 2025-08-13 00:17:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bbb989d69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b whisker-6bbb989d69-n9r76 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2429ef959c6 [] [] }} ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:51.980 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.054 [INFO][3972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" HandleID="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.055 [INFO][3972] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" HandleID="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"whisker-6bbb989d69-n9r76", "timestamp":"2025-08-13 00:17:52.054906254 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.055 [INFO][3972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.055 [INFO][3972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.055 [INFO][3972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.080 [INFO][3972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.091 [INFO][3972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.101 [INFO][3972] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.106 [INFO][3972] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.111 [INFO][3972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.111 [INFO][3972] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.114 [INFO][3972] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0 Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.127 [INFO][3972] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.140 [INFO][3972] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.129/26] block=192.168.87.128/26 handle="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.140 [INFO][3972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.129/26] handle="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:52.201872 
containerd[1592]: 2025-08-13 00:17:52.140 [INFO][3972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:52.201872 containerd[1592]: 2025-08-13 00:17:52.140 [INFO][3972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.129/26] IPv6=[] ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" HandleID="k8s-pod-network.096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.146 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0", GenerateName:"whisker-6bbb989d69-", Namespace:"calico-system", SelfLink:"", UID:"72ca43d6-a77f-4bfa-9775-db67177b1871", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbb989d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"whisker-6bbb989d69-n9r76", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2429ef959c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.146 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.129/32] ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.146 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2429ef959c6 ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.165 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.168 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0", GenerateName:"whisker-6bbb989d69-", Namespace:"calico-system", SelfLink:"", UID:"72ca43d6-a77f-4bfa-9775-db67177b1871", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbb989d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0", Pod:"whisker-6bbb989d69-n9r76", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2429ef959c6", MAC:"1e:72:2e:52:6b:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:52.204154 containerd[1592]: 2025-08-13 00:17:52.197 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0" Namespace="calico-system" Pod="whisker-6bbb989d69-n9r76" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6bbb989d69--n9r76-eth0" Aug 13 00:17:52.240730 containerd[1592]: time="2025-08-13T00:17:52.240381371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:52.241114 containerd[1592]: time="2025-08-13T00:17:52.240801023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:52.242192 containerd[1592]: time="2025-08-13T00:17:52.241946216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:52.243009 containerd[1592]: time="2025-08-13T00:17:52.242844561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:52.351090 containerd[1592]: time="2025-08-13T00:17:52.350944597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbb989d69-n9r76,Uid:72ca43d6-a77f-4bfa-9775-db67177b1871,Namespace:calico-system,Attempt:0,} returns sandbox id \"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0\"" Aug 13 00:17:52.354672 containerd[1592]: time="2025-08-13T00:17:52.354599821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:17:53.368028 containerd[1592]: time="2025-08-13T00:17:53.365796701Z" level=info msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" Aug 13 00:17:53.368028 containerd[1592]: time="2025-08-13T00:17:53.366395198Z" level=info msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" Aug 13 00:17:53.378434 systemd-networkd[1246]: cali2429ef959c6: Gained IPv6LL Aug 13 00:17:53.421425 kubelet[2741]: I0813 00:17:53.413009 2741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984ef076-047a-48f9-82ef-15fe5f9ec37d" path="/var/lib/kubelet/pods/984ef076-047a-48f9-82ef-15fe5f9ec37d/volumes" Aug 13 00:17:54.116247 kernel: bpftool[4194]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.797 [INFO][4152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.807 [INFO][4152] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" iface="eth0" netns="/var/run/netns/cni-482f89f2-d01e-4568-2920-8d7c2e561d24" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.816 [INFO][4152] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" iface="eth0" netns="/var/run/netns/cni-482f89f2-d01e-4568-2920-8d7c2e561d24" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.830 [INFO][4152] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" iface="eth0" netns="/var/run/netns/cni-482f89f2-d01e-4568-2920-8d7c2e561d24" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.839 [INFO][4152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:53.839 [INFO][4152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.162 [INFO][4174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.162 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.162 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.204 [WARNING][4174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.204 [INFO][4174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.210 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:54.242460 containerd[1592]: 2025-08-13 00:17:54.223 [INFO][4152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:17:54.247167 containerd[1592]: time="2025-08-13T00:17:54.245491146Z" level=info msg="TearDown network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" successfully" Aug 13 00:17:54.247167 containerd[1592]: time="2025-08-13T00:17:54.245554228Z" level=info msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" returns successfully" Aug 13 00:17:54.262072 systemd[1]: run-netns-cni\x2d482f89f2\x2dd01e\x2d4568\x2d2920\x2d8d7c2e561d24.mount: Deactivated successfully. Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.826 [INFO][4153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.831 [INFO][4153] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" iface="eth0" netns="/var/run/netns/cni-eaaf5db0-5762-4626-85f2-0eb38ffaf7fb" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.832 [INFO][4153] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" iface="eth0" netns="/var/run/netns/cni-eaaf5db0-5762-4626-85f2-0eb38ffaf7fb" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.833 [INFO][4153] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" iface="eth0" netns="/var/run/netns/cni-eaaf5db0-5762-4626-85f2-0eb38ffaf7fb" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.833 [INFO][4153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:53.833 [INFO][4153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.184 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.190 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.209 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.254 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.255 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.267 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:54.325649 containerd[1592]: 2025-08-13 00:17:54.282 [INFO][4153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:17:54.329563 containerd[1592]: time="2025-08-13T00:17:54.328970286Z" level=info msg="TearDown network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" successfully" Aug 13 00:17:54.329563 containerd[1592]: time="2025-08-13T00:17:54.329048688Z" level=info msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" returns successfully" Aug 13 00:17:54.336145 containerd[1592]: time="2025-08-13T00:17:54.335703444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzcsq,Uid:2fa714a0-45d3-4601-af8c-8a6aebd91ca9,Namespace:kube-system,Attempt:1,}" Aug 13 00:17:54.341890 containerd[1592]: time="2025-08-13T00:17:54.340686031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75vg2,Uid:211992a4-e03b-407a-b6e8-049cb37a8c67,Namespace:calico-system,Attempt:1,}" Aug 13 00:17:54.355393 systemd[1]: run-netns-cni\x2deaaf5db0\x2d5762\x2d4626\x2d85f2\x2d0eb38ffaf7fb.mount: Deactivated successfully. 
Aug 13 00:17:54.370113 containerd[1592]: time="2025-08-13T00:17:54.369743327Z" level=info msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" Aug 13 00:17:54.381339 containerd[1592]: time="2025-08-13T00:17:54.380167354Z" level=info msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" Aug 13 00:17:55.241609 systemd-journald[1165]: Under memory pressure, flushing caches. Aug 13 00:17:55.235444 systemd-resolved[1487]: Under memory pressure, flushing caches. Aug 13 00:17:55.235538 systemd-resolved[1487]: Flushed all caches. Aug 13 00:17:55.485336 containerd[1592]: time="2025-08-13T00:17:55.483690106Z" level=info msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" Aug 13 00:17:55.496600 containerd[1592]: time="2025-08-13T00:17:55.494712316Z" level=info msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" Aug 13 00:17:55.560449 systemd-networkd[1246]: vxlan.calico: Link UP Aug 13 00:17:55.560465 systemd-networkd[1246]: vxlan.calico: Gained carrier Aug 13 00:17:56.032855 systemd-networkd[1246]: cali5cff7db1a58: Link UP Aug 13 00:17:56.037316 systemd-networkd[1246]: cali5cff7db1a58: Gained carrier Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.157 [INFO][4256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0 goldmane-58fd7646b9- calico-system 211992a4-e03b-407a-b6e8-049cb37a8c67 972 0 2025-08-13 00:17:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b goldmane-58fd7646b9-75vg2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5cff7db1a58 [] [] }} ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.157 [INFO][4256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.704 [INFO][4291] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" HandleID="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.746 [INFO][4291] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" HandleID="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003058f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"goldmane-58fd7646b9-75vg2", 
"timestamp":"2025-08-13 00:17:55.704560801 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.746 [INFO][4291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.746 [INFO][4291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.747 [INFO][4291] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.816 [INFO][4291] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.842 [INFO][4291] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.870 [INFO][4291] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.884 [INFO][4291] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.897 [INFO][4291] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.897 [INFO][4291] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.906 [INFO][4291] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.920 [INFO][4291] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.951 [INFO][4291] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.130/26] block=192.168.87.128/26 handle="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.951 [INFO][4291] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.130/26] handle="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.951 [INFO][4291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:17:56.144180 containerd[1592]: 2025-08-13 00:17:55.951 [INFO][4291] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.130/26] IPv6=[] ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" HandleID="k8s-pod-network.412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:55.977 [INFO][4256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"211992a4-e03b-407a-b6e8-049cb37a8c67", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"goldmane-58fd7646b9-75vg2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cff7db1a58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:55.978 [INFO][4256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.130/32] ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:55.978 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cff7db1a58 ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:56.034 [INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:56.048 [INFO][4256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"211992a4-e03b-407a-b6e8-049cb37a8c67", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d", Pod:"goldmane-58fd7646b9-75vg2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cff7db1a58", MAC:"fa:25:a2:46:35:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.150226 containerd[1592]: 2025-08-13 00:17:56.116 [INFO][4256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d" Namespace="calico-system" Pod="goldmane-58fd7646b9-75vg2" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:17:56.280490 systemd-networkd[1246]: cali34517fbf04f: Link UP Aug 13 00:17:56.286843 systemd-networkd[1246]: cali34517fbf04f: Gained carrier Aug 13 00:17:56.364917 containerd[1592]: time="2025-08-13T00:17:56.362565719Z" level=info msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.050 [INFO][4244] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0 coredns-7c65d6cfc9- kube-system 2fa714a0-45d3-4601-af8c-8a6aebd91ca9 974 0 2025-08-13 00:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b coredns-7c65d6cfc9-gzcsq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34517fbf04f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.050 [INFO][4244] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.836 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" HandleID="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.837 [INFO][4278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" HandleID="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d1bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"coredns-7c65d6cfc9-gzcsq", "timestamp":"2025-08-13 00:17:55.83608594 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.837 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.955 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:55.959 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.043 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.087 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.135 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.146 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.166 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.167 [INFO][4278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.175 [INFO][4278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71 Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.214 [INFO][4278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" host="ci-4081-3-5-0-684996fd0b" Aug 13 
00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.234 [INFO][4278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.131/26] block=192.168.87.128/26 handle="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.234 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.131/26] handle="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.234 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:56.440289 containerd[1592]: 2025-08-13 00:17:56.234 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.131/26] IPv6=[] ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" HandleID="k8s-pod-network.a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.251 [INFO][4244] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2fa714a0-45d3-4601-af8c-8a6aebd91ca9", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"coredns-7c65d6cfc9-gzcsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34517fbf04f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.252 [INFO][4244] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.131/32] ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" 
WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.252 [INFO][4244] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34517fbf04f ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.293 [INFO][4244] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.296 [INFO][4244] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2fa714a0-45d3-4601-af8c-8a6aebd91ca9", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71", Pod:"coredns-7c65d6cfc9-gzcsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34517fbf04f", MAC:"c6:db:01:79:94:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:56.443841 containerd[1592]: 2025-08-13 00:17:56.369 [INFO][4244] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gzcsq" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.130 [INFO][4235] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.130 [INFO][4235] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" iface="eth0" netns="/var/run/netns/cni-de510725-2ad0-db70-d28e-f098979f8589" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.131 [INFO][4235] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" iface="eth0" netns="/var/run/netns/cni-de510725-2ad0-db70-d28e-f098979f8589" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.133 [INFO][4235] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" iface="eth0" netns="/var/run/netns/cni-de510725-2ad0-db70-d28e-f098979f8589" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.133 [INFO][4235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.133 [INFO][4235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.885 [INFO][4283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:55.890 [INFO][4283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:56.235 [INFO][4283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:56.384 [WARNING][4283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:56.386 [INFO][4283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:56.397 [INFO][4283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:56.512989 containerd[1592]: 2025-08-13 00:17:56.447 [INFO][4235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:17:56.523060 systemd[1]: run-netns-cni\x2dde510725\x2d2ad0\x2ddb70\x2dd28e\x2df098979f8589.mount: Deactivated successfully. 
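Editorial note: each assignment burst above repeats one sequence — acquire the host-wide lock, confirm this node's affinity for the 192.168.87.128/26 block, then claim the next free address from it (.129 for whisker, .130 for goldmane, .131 for coredns). A toy sketch of first-free allocation inside a /26, assuming an in-memory "used" set rather than Calico's real block datastructure:

```go
package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock marks and returns the first unallocated address in the
// block, skipping the network address itself.
func assignFromBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted: the allocator would try another block
}

func main() {
	block := netip.MustParsePrefix("192.168.87.128/26")
	used := map[netip.Addr]bool{}
	for i := 0; i < 3; i++ {
		a, _ := assignFromBlock(block, used)
		fmt.Println(a) // 192.168.87.129, .130, .131 — matching the log above
	}
}
```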
Aug 13 00:17:56.547854 containerd[1592]: time="2025-08-13T00:17:56.545521724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:56.614535 containerd[1592]: time="2025-08-13T00:17:56.614438620Z" level=info msg="TearDown network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" successfully" Aug 13 00:17:56.614535 containerd[1592]: time="2025-08-13T00:17:56.614513543Z" level=info msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" returns successfully" Aug 13 00:17:56.634834 containerd[1592]: time="2025-08-13T00:17:56.634703197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:17:56.650117 containerd[1592]: time="2025-08-13T00:17:56.650046504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855f47cdff-5778s,Uid:660c42fe-0b74-4f87-a47c-3e9a64771e8c,Namespace:calico-system,Attempt:1,}" Aug 13 00:17:56.658916 containerd[1592]: time="2025-08-13T00:17:56.658566203Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:56.677496 containerd[1592]: time="2025-08-13T00:17:56.676724315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:56.677496 containerd[1592]: time="2025-08-13T00:17:56.676842479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:56.677496 containerd[1592]: time="2025-08-13T00:17:56.676884760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:56.677496 containerd[1592]: time="2025-08-13T00:17:56.677075366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.278 [INFO][4234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.278 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" iface="eth0" netns="/var/run/netns/cni-f7ef19b7-7318-87dc-4a11-d418130a24f5" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.279 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" iface="eth0" netns="/var/run/netns/cni-f7ef19b7-7318-87dc-4a11-d418130a24f5" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.345 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" iface="eth0" netns="/var/run/netns/cni-f7ef19b7-7318-87dc-4a11-d418130a24f5" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.345 [INFO][4234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.345 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.975 [INFO][4309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:55.976 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:56.406 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:56.508 [WARNING][4309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:56.509 [INFO][4309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:56.542 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:56.694982 containerd[1592]: 2025-08-13 00:17:56.653 [INFO][4234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:17:56.713538 containerd[1592]: time="2025-08-13T00:17:56.712977778Z" level=info msg="TearDown network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" successfully" Aug 13 00:17:56.716491 containerd[1592]: time="2025-08-13T00:17:56.716072552Z" level=info msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" returns successfully" Aug 13 00:17:56.721281 containerd[1592]: time="2025-08-13T00:17:56.720635571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-qg8sr,Uid:2f19c543-3502-4177-93bc-f402734db516,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:17:56.727523 systemd[1]: run-netns-cni\x2df7ef19b7\x2d7318\x2d87dc\x2d4a11\x2dd418130a24f5.mount: Deactivated successfully. 
Aug 13 00:17:56.879781 containerd[1592]: time="2025-08-13T00:17:56.878512373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:17:56.899123 containerd[1592]: time="2025-08-13T00:17:56.898439259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 4.543756836s" Aug 13 00:17:56.899123 containerd[1592]: time="2025-08-13T00:17:56.898522342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:17:56.969879 containerd[1592]: time="2025-08-13T00:17:56.969175651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:56.969879 containerd[1592]: time="2025-08-13T00:17:56.969324856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:56.969879 containerd[1592]: time="2025-08-13T00:17:56.969444499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:56.969879 containerd[1592]: time="2025-08-13T00:17:56.969692507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:56.972724 containerd[1592]: time="2025-08-13T00:17:56.970101279Z" level=info msg="CreateContainer within sandbox \"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:17:57.198460 containerd[1592]: time="2025-08-13T00:17:57.197437164Z" level=info msg="CreateContainer within sandbox \"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"233f01e8f584a1375d4c24715f14c25658eb9001a0a6c6b5905c418c35f019b6\"" Aug 13 00:17:57.200959 containerd[1592]: time="2025-08-13T00:17:57.200768507Z" level=info msg="StartContainer for \"233f01e8f584a1375d4c24715f14c25658eb9001a0a6c6b5905c418c35f019b6\"" Aug 13 00:17:57.282137 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Aug 13 00:17:57.346862 systemd-networkd[1246]: cali34517fbf04f: Gained IPv6LL Aug 13 00:17:57.370456 containerd[1592]: time="2025-08-13T00:17:57.369339671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-75vg2,Uid:211992a4-e03b-407a-b6e8-049cb37a8c67,Namespace:calico-system,Attempt:1,} returns sandbox id \"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d\"" Aug 13 00:17:57.389703 containerd[1592]: time="2025-08-13T00:17:57.389271247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.354 [INFO][4349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.364 [INFO][4349] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" iface="eth0" netns="/var/run/netns/cni-9ceb35a3-96f6-5ff4-3804-a5a6a0901afe" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.368 [INFO][4349] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" iface="eth0" netns="/var/run/netns/cni-9ceb35a3-96f6-5ff4-3804-a5a6a0901afe" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.374 [INFO][4349] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" iface="eth0" netns="/var/run/netns/cni-9ceb35a3-96f6-5ff4-3804-a5a6a0901afe" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.374 [INFO][4349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:56.374 [INFO][4349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.244 [INFO][4409] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.244 [INFO][4409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.244 [INFO][4409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.337 [WARNING][4409] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.337 [INFO][4409] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.344 [INFO][4409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:57.421322 containerd[1592]: 2025-08-13 00:17:57.401 [INFO][4349] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:17:57.436243 containerd[1592]: time="2025-08-13T00:17:57.433283846Z" level=info msg="TearDown network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" successfully" Aug 13 00:17:57.440578 containerd[1592]: time="2025-08-13T00:17:57.440112896Z" level=info msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" returns successfully" Aug 13 00:17:57.476059 systemd-networkd[1246]: cali5cff7db1a58: Gained IPv6LL Aug 13 00:17:57.482490 containerd[1592]: time="2025-08-13T00:17:57.481934188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf8md,Uid:6428bddd-411d-486c-a798-4f373ec4640c,Namespace:kube-system,Attempt:1,}" Aug 13 00:17:57.540846 systemd[1]: run-netns-cni\x2d9ceb35a3\x2d96f6\x2d5ff4\x2d3804\x2da5a6a0901afe.mount: Deactivated successfully. Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.593 [INFO][4358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.594 [INFO][4358] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" iface="eth0" netns="/var/run/netns/cni-f53d12f2-b993-d29c-1f83-834430a30613" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.596 [INFO][4358] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" iface="eth0" netns="/var/run/netns/cni-f53d12f2-b993-d29c-1f83-834430a30613" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.618 [INFO][4358] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" iface="eth0" netns="/var/run/netns/cni-f53d12f2-b993-d29c-1f83-834430a30613" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.618 [INFO][4358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:56.618 [INFO][4358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.318 [INFO][4442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.340 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.344 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.431 [WARNING][4442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.432 [INFO][4442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.448 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:57.562486 containerd[1592]: 2025-08-13 00:17:57.494 [INFO][4358] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:17:57.574709 systemd[1]: run-netns-cni\x2df53d12f2\x2db993\x2dd29c\x2d1f83\x2d834430a30613.mount: Deactivated successfully. Aug 13 00:17:57.593467 containerd[1592]: time="2025-08-13T00:17:57.593018817Z" level=info msg="TearDown network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" successfully" Aug 13 00:17:57.594484 containerd[1592]: time="2025-08-13T00:17:57.594322537Z" level=info msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" returns successfully" Aug 13 00:17:57.625717 containerd[1592]: time="2025-08-13T00:17:57.625163010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-rhp59,Uid:960cd058-0fa0-479e-b7da-fdbdd01280da,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.068 [INFO][4431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.079 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" iface="eth0" netns="/var/run/netns/cni-19d6886c-67f4-3c74-3295-47f84982780b" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.080 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" iface="eth0" netns="/var/run/netns/cni-19d6886c-67f4-3c74-3295-47f84982780b" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.087 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" iface="eth0" netns="/var/run/netns/cni-19d6886c-67f4-3c74-3295-47f84982780b" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.087 [INFO][4431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.087 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.599 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.614 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.624 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.735 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.738 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.748 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:57.784704 containerd[1592]: 2025-08-13 00:17:57.757 [INFO][4431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:17:57.792415 containerd[1592]: time="2025-08-13T00:17:57.792321810Z" level=info msg="TearDown network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" successfully" Aug 13 00:17:57.797795 systemd[1]: run-netns-cni\x2d19d6886c\x2d67f4\x2d3c74\x2d3295\x2d47f84982780b.mount: Deactivated successfully. 
Aug 13 00:17:57.809270 containerd[1592]: time="2025-08-13T00:17:57.809158650Z" level=info msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" returns successfully" Aug 13 00:17:57.809789 containerd[1592]: time="2025-08-13T00:17:57.804881638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gzcsq,Uid:2fa714a0-45d3-4601-af8c-8a6aebd91ca9,Namespace:kube-system,Attempt:1,} returns sandbox id \"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71\"" Aug 13 00:17:57.821933 containerd[1592]: time="2025-08-13T00:17:57.813579387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hw4j,Uid:851fc6af-b9af-4d67-92e5-4dcf6cbec03a,Namespace:calico-system,Attempt:1,}" Aug 13 00:17:57.826632 containerd[1592]: time="2025-08-13T00:17:57.826459304Z" level=info msg="StartContainer for \"233f01e8f584a1375d4c24715f14c25658eb9001a0a6c6b5905c418c35f019b6\" returns successfully" Aug 13 00:17:57.846470 containerd[1592]: time="2025-08-13T00:17:57.845496252Z" level=info msg="CreateContainer within sandbox \"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:17:58.128354 containerd[1592]: time="2025-08-13T00:17:58.128084912Z" level=info msg="CreateContainer within sandbox \"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0915dd37bda44b80198783d85338929d0a1ea4fe985cfcacfc44c0d569ac231b\"" Aug 13 00:17:58.134288 containerd[1592]: time="2025-08-13T00:17:58.133100710Z" level=info msg="StartContainer for \"0915dd37bda44b80198783d85338929d0a1ea4fe985cfcacfc44c0d569ac231b\"" Aug 13 00:17:58.289729 systemd-networkd[1246]: cali47afaa4e66f: Link UP Aug 13 00:17:58.311892 systemd-networkd[1246]: cali47afaa4e66f: Gained carrier Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.750 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0 calico-apiserver-79d645b7db- calico-apiserver 2f19c543-3502-4177-93bc-f402734db516 979 0 2025-08-13 00:17:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d645b7db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b calico-apiserver-79d645b7db-qg8sr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali47afaa4e66f [] [] }} ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.751 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.977 [INFO][4618] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" HandleID="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.977 [INFO][4618] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" HandleID="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ca2b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-0-684996fd0b", "pod":"calico-apiserver-79d645b7db-qg8sr", "timestamp":"2025-08-13 00:17:57.975757913 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.978 [INFO][4618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.978 [INFO][4618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:57.981 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.038 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.059 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.090 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.098 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.119 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.125 [INFO][4618] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.137 [INFO][4618] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.161 [INFO][4618] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.198 [INFO][4618] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.132/26] block=192.168.87.128/26 
handle="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.198 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.132/26] handle="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.202 [INFO][4618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:17:58.506544 containerd[1592]: 2025-08-13 00:17:58.209 [INFO][4618] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.132/26] IPv6=[] ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" HandleID="k8s-pod-network.63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.258 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f19c543-3502-4177-93bc-f402734db516", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"calico-apiserver-79d645b7db-qg8sr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47afaa4e66f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.259 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.132/32] ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.259 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47afaa4e66f ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" 
WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.388 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.395 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f19c543-3502-4177-93bc-f402734db516", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d", Pod:"calico-apiserver-79d645b7db-qg8sr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47afaa4e66f", MAC:"16:f6:07:50:bc:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:58.516708 containerd[1592]: 2025-08-13 00:17:58.442 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-qg8sr" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:17:58.663617 systemd[1]: run-containerd-runc-k8s.io-0915dd37bda44b80198783d85338929d0a1ea4fe985cfcacfc44c0d569ac231b-runc.sFLoTs.mount: Deactivated successfully. 
Aug 13 00:17:58.775348 systemd-networkd[1246]: cali9f12c3e2a27: Link UP Aug 13 00:17:58.775913 systemd-networkd[1246]: cali9f12c3e2a27: Gained carrier Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:57.722 [INFO][4483] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0 calico-kube-controllers-855f47cdff- calico-system 660c42fe-0b74-4f87-a47c-3e9a64771e8c 978 0 2025-08-13 00:17:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:855f47cdff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b calico-kube-controllers-855f47cdff-5778s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f12c3e2a27 [] [] }} ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:57.723 [INFO][4483] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:57.984 [INFO][4613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" HandleID="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:57.985 [INFO][4613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" HandleID="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400025bb60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"calico-kube-controllers-855f47cdff-5778s", "timestamp":"2025-08-13 00:17:57.982652406 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:57.986 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.200 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
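Note how the two plugin invocations above interleave: [4618] acquires the host-wide IPAM lock at 57.978 and holds it through its whole assignment, while [4613] logs "About to acquire" at 57.986 but only "Acquired" at 58.200, right as [4618] finishes. The lock serializes block reads and writes on the host. Below is a process-local sketch of that acquire/log/release discipline; the real lock must work across separate CNI plugin processes, so it is not literally a sync.Mutex:

```go
// Minimal sketch of the "About to acquire / Acquired / Released host-wide
// IPAM lock" discipline seen above. A real CNI plugin runs as separate
// processes, so Calico's lock is host-wide; this sketch only models the
// serialization the log lines demonstrate.
package main

import (
	"log"
	"sync"
	"time"
)

var ipamLock sync.Mutex

func assignIP(plugin string) {
	log.Printf("[%s] About to acquire host-wide IPAM lock.", plugin)
	ipamLock.Lock()
	log.Printf("[%s] Acquired host-wide IPAM lock.", plugin)
	defer func() {
		ipamLock.Unlock()
		log.Printf("[%s] Released host-wide IPAM lock.", plugin)
	}()
	time.Sleep(50 * time.Millisecond) // stand-in for block load + claim
}

func main() {
	var wg sync.WaitGroup
	for _, p := range []string{"4613", "4618"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); assignIP(p) }(p)
	}
	wg.Wait() // the second invocation blocks until the first releases
}
```

The "Released" line printing at 58.202, after the other plugin's "Acquired" at 58.200, is consistent with logging the release after the unlock itself, as the deferred function here does.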
Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.205 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.327 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.400 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.440 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.455 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.490 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.505 [INFO][4613] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.520 [INFO][4613] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0 Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.567 [INFO][4613] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.605 [INFO][4613] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.133/26] block=192.168.87.128/26 handle="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.605 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.133/26] handle="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.605 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
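Inside the lock, [4613] walks the standard path: look up the host's block affinities, try 192.168.87.128/26, load the block, assign one address, create a handle named after the ContainerID, then "write block in order to claim IPs" (the datastore write is what actually commits the claim). The sketch below models only the free-ordinal scan; the in-memory bitmap, the shortened handle name, and the assumption that ordinals 0 through 4 (.128 to .132) were already taken are stand-ins for the real block document and its compare-and-swap write:

```go
// Sketch of "Attempting to assign 1 addresses from block": pick the first
// unallocated ordinal in the /26 and map it to an address.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr netip.Prefix
	used map[int]string // ordinal -> handleID
}

func (b *block) assign(handleID string) (netip.Addr, bool) {
	size := 1 << (32 - b.cidr.Bits()) // 64 for a /26
	addr := b.cidr.Addr()             // ordinal 0 = 192.168.87.128
	for ord := 0; ord < size; ord++ {
		if _, taken := b.used[ord]; !taken {
			b.used[ord] = handleID // then: write block back to the datastore
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block full: IPAM would try another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.87.128/26"), used: map[int]string{
		0: "h0", 1: "h1", 2: "h2", 3: "h3", 4: "h4", // assume .128-.132 taken
	}}
	ip, _ := b.assign("k8s-pod-network.d53a3e") // shortened handle, hypothetical
	fmt.Println(ip)                             // 192.168.87.133, matching the log
}
```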
Aug 13 00:17:58.922147 containerd[1592]: 2025-08-13 00:17:58.605 [INFO][4613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.133/26] IPv6=[] ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" HandleID="k8s-pod-network.d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.693 [INFO][4483] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0", GenerateName:"calico-kube-controllers-855f47cdff-", Namespace:"calico-system", SelfLink:"", UID:"660c42fe-0b74-4f87-a47c-3e9a64771e8c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855f47cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"calico-kube-controllers-855f47cdff-5778s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f12c3e2a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.694 [INFO][4483] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.133/32] ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.694 [INFO][4483] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f12c3e2a27 ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.790 [INFO][4483] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" 
WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.809 [INFO][4483] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0", GenerateName:"calico-kube-controllers-855f47cdff-", Namespace:"calico-system", SelfLink:"", UID:"660c42fe-0b74-4f87-a47c-3e9a64771e8c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855f47cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0", Pod:"calico-kube-controllers-855f47cdff-5778s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f12c3e2a27", MAC:"ae:27:12:1e:8f:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:58.930193 containerd[1592]: 2025-08-13 00:17:58.847 [INFO][4483] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0" Namespace="calico-system" Pod="calico-kube-controllers-855f47cdff-5778s" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:17:59.046829 containerd[1592]: time="2025-08-13T00:17:59.042387080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:59.055832 containerd[1592]: time="2025-08-13T00:17:59.044314861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:59.057375 containerd[1592]: time="2025-08-13T00:17:59.056740495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:59.061139 containerd[1592]: time="2025-08-13T00:17:59.061028311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:59.237048 containerd[1592]: time="2025-08-13T00:17:59.235165878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:17:59.237048 containerd[1592]: time="2025-08-13T00:17:59.235337724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:17:59.237048 containerd[1592]: time="2025-08-13T00:17:59.235379605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:59.237048 containerd[1592]: time="2025-08-13T00:17:59.235620773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:17:59.251066 containerd[1592]: time="2025-08-13T00:17:59.247155339Z" level=info msg="StartContainer for \"0915dd37bda44b80198783d85338929d0a1ea4fe985cfcacfc44c0d569ac231b\" returns successfully" Aug 13 00:17:59.651363 systemd-networkd[1246]: cali47afaa4e66f: Gained IPv6LL Aug 13 00:17:59.752795 systemd-networkd[1246]: calia156e7fbc89: Link UP Aug 13 00:17:59.762908 systemd-networkd[1246]: calia156e7fbc89: Gained carrier Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:58.420 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0 coredns-7c65d6cfc9- kube-system 6428bddd-411d-486c-a798-4f373ec4640c 991 0 2025-08-13 00:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b coredns-7c65d6cfc9-nf8md eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia156e7fbc89 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:58.422 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" HandleID="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" HandleID="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bc830), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"coredns-7c65d6cfc9-nf8md", "timestamp":"2025-08-13 00:17:59.164815205 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.227 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.297 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.398 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.471 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.501 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.501 [INFO][4684] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.513 [INFO][4684] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.577 [INFO][4684] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.632 [INFO][4684] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.134/26] block=192.168.87.128/26 handle="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.632 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.134/26] handle="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.632 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
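Each Calico line here is wrapped twice: the journald prefix ("Aug 13 ... containerd[1592]:"), then the plugin's own timestamp, level, request number, source file and line, and message. A small Go regexp pulls the inner fields out of a verbatim line from this log:

```go
// Extract the inner fields of a containerd-wrapped Calico CNI log line.
package main

import (
	"fmt"
	"regexp"
)

var calicoLine = regexp.MustCompile(
	`containerd\[(\d+)\]: (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := `Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.167 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.`
	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("pid=%s ts=%s level=%s req=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
```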
Aug 13 00:17:59.936078 containerd[1592]: 2025-08-13 00:17:59.632 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.134/26] IPv6=[] ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" HandleID="k8s-pod-network.cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.693 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6428bddd-411d-486c-a798-4f373ec4640c", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"coredns-7c65d6cfc9-nf8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia156e7fbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.694 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.134/32] ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.694 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia156e7fbc89 ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.777 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.807 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6428bddd-411d-486c-a798-4f373ec4640c", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d", Pod:"coredns-7c65d6cfc9-nf8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia156e7fbc89", MAC:"32:df:60:34:4c:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:17:59.943029 containerd[1592]: 2025-08-13 00:17:59.865 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf8md" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:00.017383 containerd[1592]: time="2025-08-13T00:18:00.017024700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-qg8sr,Uid:2f19c543-3502-4177-93bc-f402734db516,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d\"" Aug 13 00:18:00.143182 kubelet[2741]: I0813 00:18:00.140475 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gzcsq" podStartSLOduration=65.140351505 podStartE2EDuration="1m5.140351505s" podCreationTimestamp="2025-08-13 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:18:00.122129439 +0000 UTC 
m=+71.389854043" watchObservedRunningTime="2025-08-13 00:18:00.140351505 +0000 UTC m=+71.408076229" Aug 13 00:18:00.152667 containerd[1592]: time="2025-08-13T00:18:00.142788224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:00.152667 containerd[1592]: time="2025-08-13T00:18:00.149698246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:00.152667 containerd[1592]: time="2025-08-13T00:18:00.149747167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:00.152667 containerd[1592]: time="2025-08-13T00:18:00.149989495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:00.202316 systemd-networkd[1246]: calid402a4b462e: Link UP Aug 13 00:18:00.216998 systemd-networkd[1246]: calid402a4b462e: Gained carrier Aug 13 00:18:00.529049 systemd-networkd[1246]: calicd607f504f2: Link UP Aug 13 00:18:00.546433 systemd-networkd[1246]: calicd607f504f2: Gained carrier Aug 13 00:18:00.613357 systemd-networkd[1246]: cali9f12c3e2a27: Gained IPv6LL Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:58.966 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0 calico-apiserver-79d645b7db- calico-apiserver 960cd058-0fa0-479e-b7da-fdbdd01280da 995 0 2025-08-13 00:17:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d645b7db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b calico-apiserver-79d645b7db-rhp59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid402a4b462e [] [] }} ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:58.967 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.546 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" HandleID="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.547 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" HandleID="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-0-684996fd0b", "pod":"calico-apiserver-79d645b7db-rhp59", "timestamp":"2025-08-13 00:17:59.546561362 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.547 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.632 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.641 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.802 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.871 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.953 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.961 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.974 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.974 [INFO][4742] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:17:59.985 [INFO][4742] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:18:00.007 [INFO][4742] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 handle="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:18:00.062 [INFO][4742] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.135/26] block=192.168.87.128/26 handle="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:18:00.062 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.135/26] handle="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:18:00.062 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
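The kubelet entry a few lines up reports podStartSLOduration=65.140351505 for coredns-7c65d6cfc9-gzcsq. That figure matches watchObservedRunningTime (00:18:00.140351505) minus podCreationTimestamp (00:16:55); the firstStartedPulling/lastFinishedPulling fields are zero times here, so no image-pull window is subtracted. Reproducing the arithmetic from the logged timestamps:

```go
// Recompute podStartSLOduration from the kubelet entry above:
// watchObservedRunningTime minus podCreationTimestamp (the image-pull
// window is zero in this entry, so nothing is subtracted for pulling).
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-08-13 00:16:55 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-08-13 00:18:00.140351505 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 65.140351505
}
```

The m=+71.39 suffix on the observed time is the kubelet process's monotonic clock reading (about 71 s since kubelet start), which time.Sub would prefer when both values carry it; parsing from text, as here, uses the wall-clock values.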
Aug 13 00:18:00.804023 containerd[1592]: 2025-08-13 00:18:00.062 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.135/26] IPv6=[] ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" HandleID="k8s-pod-network.89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 00:18:00.157 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"960cd058-0fa0-479e-b7da-fdbdd01280da", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"calico-apiserver-79d645b7db-rhp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid402a4b462e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 00:18:00.158 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.135/32] ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 00:18:00.158 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid402a4b462e ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 00:18:00.242 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 
00:18:00.260 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"960cd058-0fa0-479e-b7da-fdbdd01280da", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f", Pod:"calico-apiserver-79d645b7db-rhp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid402a4b462e", MAC:"a2:0e:28:39:64:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:00.807928 containerd[1592]: 2025-08-13 00:18:00.345 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f" Namespace="calico-apiserver" Pod="calico-apiserver-79d645b7db-rhp59" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:17:59.122 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0 csi-node-driver- calico-system 851fc6af-b9af-4d67-92e5-4dcf6cbec03a 997 0 2025-08-13 00:17:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-0-684996fd0b csi-node-driver-6hw4j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd607f504f2 [] [] }} ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:17:59.122 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:17:59.961 [INFO][4763] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" HandleID="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:17:59.967 [INFO][4763] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" HandleID="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-0-684996fd0b", "pod":"csi-node-driver-6hw4j", "timestamp":"2025-08-13 00:17:59.961798141 +0000 UTC"}, Hostname:"ci-4081-3-5-0-684996fd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:17:59.970 [INFO][4763] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.073 [INFO][4763] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.073 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-0-684996fd0b' Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.151 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.233 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.293 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.327 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.344 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.87.128/26 host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.347 [INFO][4763] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.87.128/26 handle="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.361 [INFO][4763] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2 Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.387 [INFO][4763] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.87.128/26 
handle="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.441 [INFO][4763] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.87.136/26] block=192.168.87.128/26 handle="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.441 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.87.136/26] handle="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" host="ci-4081-3-5-0-684996fd0b" Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.442 [INFO][4763] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:00.829803 containerd[1592]: 2025-08-13 00:18:00.444 [INFO][4763] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.87.136/26] IPv6=[] ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" HandleID="k8s-pod-network.03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.478 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"851fc6af-b9af-4d67-92e5-4dcf6cbec03a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"", Pod:"csi-node-driver-6hw4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd607f504f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.480 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.136/32] ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.482 [INFO][4658] cni-plugin/dataplane_linux.go 
69: Setting the host side veth name to calicd607f504f2 ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.561 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.570 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"851fc6af-b9af-4d67-92e5-4dcf6cbec03a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2", Pod:"csi-node-driver-6hw4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd607f504f2", MAC:"4a:f9:ed:bd:5f:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:00.836804 containerd[1592]: 2025-08-13 00:18:00.699 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2" Namespace="calico-system" Pod="csi-node-driver-6hw4j" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:00.973842 containerd[1592]: time="2025-08-13T00:18:00.973726059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf8md,Uid:6428bddd-411d-486c-a798-4f373ec4640c,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d\"" Aug 13 00:18:01.126073 systemd-networkd[1246]: calia156e7fbc89: Gained IPv6LL Aug 13 00:18:01.178286 containerd[1592]: time="2025-08-13T00:18:01.177930535Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-855f47cdff-5778s,Uid:660c42fe-0b74-4f87-a47c-3e9a64771e8c,Namespace:calico-system,Attempt:1,} returns sandbox id \"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0\"" Aug 13 00:18:01.201352 containerd[1592]: time="2025-08-13T00:18:01.199912131Z" level=info msg="CreateContainer within sandbox \"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:18:01.252335 systemd-networkd[1246]: calid402a4b462e: Gained IPv6LL Aug 13 00:18:01.347347 containerd[1592]: time="2025-08-13T00:18:01.284734732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:01.347347 containerd[1592]: time="2025-08-13T00:18:01.284853576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:01.347347 containerd[1592]: time="2025-08-13T00:18:01.284896657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:01.347347 containerd[1592]: time="2025-08-13T00:18:01.285073103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:01.360602 containerd[1592]: time="2025-08-13T00:18:01.314901594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:18:01.360602 containerd[1592]: time="2025-08-13T00:18:01.315010837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:18:01.360602 containerd[1592]: time="2025-08-13T00:18:01.315050359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:01.360602 containerd[1592]: time="2025-08-13T00:18:01.331777223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:18:01.381911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786762695.mount: Deactivated successfully. 
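
The IPAM sequence above walks Calico's block-affinity path: host ci-4081-3-5-0-684996fd0b already holds an affinity for the block 192.168.87.128/26, so the plugin loads that block and claims 192.168.87.136 from it. A minimal Go sketch of the containment arithmetic those entries imply, standard library only; the two addresses are copied from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block with host affinity ("Trying affinity for 192.168.87.128/26").
	block := netip.MustParsePrefix("192.168.87.128/26")
	// Address claimed from it ("Successfully claimed IPs: [192.168.87.136/26]").
	addr := netip.MustParseAddr("192.168.87.136")

	fmt.Println(block.Contains(addr))     // true: the /26 spans .128 through .191
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses in one such IPAM block
}

The per-host block is why the plugin takes a host-wide IPAM lock before assigning: concurrent CNI ADDs (note the two parallel flows, [4644] and [4658]) must serialize their claims against the same /26.
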
Aug 13 00:18:01.492832 containerd[1592]: time="2025-08-13T00:18:01.492721302Z" level=info msg="CreateContainer within sandbox \"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9a72c0259de7ee24854e5f430801eb5da781e8c9bebf63b4781c189d3de1a42\"" Aug 13 00:18:01.523146 containerd[1592]: time="2025-08-13T00:18:01.522743999Z" level=info msg="StartContainer for \"b9a72c0259de7ee24854e5f430801eb5da781e8c9bebf63b4781c189d3de1a42\"" Aug 13 00:18:02.037169 containerd[1592]: time="2025-08-13T00:18:02.037075355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6hw4j,Uid:851fc6af-b9af-4d67-92e5-4dcf6cbec03a,Namespace:calico-system,Attempt:1,} returns sandbox id \"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2\"" Aug 13 00:18:02.304695 containerd[1592]: time="2025-08-13T00:18:02.304334678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d645b7db-rhp59,Uid:960cd058-0fa0-479e-b7da-fdbdd01280da,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f\"" Aug 13 00:18:02.385000 containerd[1592]: time="2025-08-13T00:18:02.383603729Z" level=info msg="StartContainer for \"b9a72c0259de7ee24854e5f430801eb5da781e8c9bebf63b4781c189d3de1a42\" returns successfully" Aug 13 00:18:02.467472 systemd-networkd[1246]: calicd607f504f2: Gained IPv6LL Aug 13 00:18:02.652491 kubelet[2741]: I0813 00:18:02.646181 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nf8md" podStartSLOduration=67.645086582 podStartE2EDuration="1m7.645086582s" podCreationTimestamp="2025-08-13 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:18:02.643642775 +0000 UTC m=+73.911367459" watchObservedRunningTime="2025-08-13 00:18:02.645086582 +0000 UTC m=+73.912811226" Aug 13 00:18:03.936718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195855659.mount: Deactivated successfully. 
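
The pod_startup_latency_tracker entry above prints Go time.Time values in their default String() form, so its arithmetic is easy to reproduce: podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime. A small sketch using only the values logged above (the layout constant is Go's reference time for that default format; the trailing "m=+…" monotonic suffix in the log has no parse verb and is dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time.String() layout, as kubelet logged it above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-08-13 00:16:55 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-08-13 00:18:02.645086582 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 1m7.645086582s, the logged podStartE2EDuration
}

The zero-valued firstStartedPulling/lastFinishedPulling ("0001-01-01 00:00:00 +0000 UTC") in that entry indicate the coredns image needed no pull, which is why podStartSLOduration equals the full E2E duration.
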
Aug 13 00:18:05.834229 containerd[1592]: time="2025-08-13T00:18:05.834100765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:18:05.882555 containerd[1592]: time="2025-08-13T00:18:05.882416489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 8.493068479s" Aug 13 00:18:05.882555 containerd[1592]: time="2025-08-13T00:18:05.882500572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:18:05.886912 containerd[1592]: time="2025-08-13T00:18:05.886718396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:18:05.893714 containerd[1592]: time="2025-08-13T00:18:05.891994335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:05.900253 containerd[1592]: time="2025-08-13T00:18:05.896510889Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:05.900253 containerd[1592]: time="2025-08-13T00:18:05.897442080Z" level=info msg="CreateContainer within sandbox \"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:18:05.904023 containerd[1592]: time="2025-08-13T00:18:05.903941862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:05.937828 containerd[1592]: time="2025-08-13T00:18:05.937726291Z" level=info msg="CreateContainer within sandbox \"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7\"" Aug 13 00:18:05.943980 containerd[1592]: time="2025-08-13T00:18:05.943814338Z" level=info msg="StartContainer for \"3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7\"" Aug 13 00:18:06.170161 containerd[1592]: time="2025-08-13T00:18:06.169935691Z" level=info msg="StartContainer for \"3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7\" returns successfully" Aug 13 00:18:06.735262 kubelet[2741]: I0813 00:18:06.728282 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-75vg2" podStartSLOduration=31.231910891 podStartE2EDuration="39.72825064s" podCreationTimestamp="2025-08-13 00:17:27 +0000 UTC" firstStartedPulling="2025-08-13 00:17:57.388437981 +0000 UTC m=+68.656162625" lastFinishedPulling="2025-08-13 00:18:05.88477777 +0000 UTC m=+77.152502374" observedRunningTime="2025-08-13 00:18:06.722549164 +0000 UTC m=+77.990273808" watchObservedRunningTime="2025-08-13 00:18:06.72825064 +0000 UTC m=+77.995975244" Aug 13 00:18:10.346424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685909069.mount: Deactivated successfully. 
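
The goldmane pull above reports both the bytes fetched ("bytes read=61838790") and the elapsed time ("in 8.493068479s"), enough for a back-of-the-envelope registry-throughput estimate for this node. A sketch using just those two logged figures, nothing else assumed:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 61838790                       // "active requests=0, bytes read=61838790"
	elapsed, _ := time.ParseDuration("8.493068479s") // "... in 8.493068479s"

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s = %.2f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
	// prints: 59.0 MiB in 8.493068479s = 6.94 MiB/s
}

Roughly 7 MiB/s from ghcr.io, which is consistent with the multi-second pull durations reported for the other calico images later in the log.
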
Aug 13 00:18:10.370713 containerd[1592]: time="2025-08-13T00:18:10.370589514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:10.381253 containerd[1592]: time="2025-08-13T00:18:10.377661806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:18:10.381253 containerd[1592]: time="2025-08-13T00:18:10.377850853Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:10.387242 containerd[1592]: time="2025-08-13T00:18:10.386036905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:10.395863 containerd[1592]: time="2025-08-13T00:18:10.392695142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.50580894s" Aug 13 00:18:10.395863 containerd[1592]: time="2025-08-13T00:18:10.393009193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:18:10.399350 containerd[1592]: time="2025-08-13T00:18:10.397894607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:18:10.406089 containerd[1592]: time="2025-08-13T00:18:10.406022417Z" level=info msg="CreateContainer within sandbox \"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:18:10.562312 containerd[1592]: time="2025-08-13T00:18:10.561845329Z" level=info msg="CreateContainer within sandbox \"096ade79ce83b64fe67edb963d90cb9ee52b67d0d8082e82154d60d8639c2ae0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0a67612705f316478855d1fa99f824f45aadde4cd1d7a6d7094439d68674b69d\"" Aug 13 00:18:10.565237 containerd[1592]: time="2025-08-13T00:18:10.564038487Z" level=info msg="StartContainer for \"0a67612705f316478855d1fa99f824f45aadde4cd1d7a6d7094439d68674b69d\"" Aug 13 00:18:10.884828 containerd[1592]: time="2025-08-13T00:18:10.884711913Z" level=info msg="StartContainer for \"0a67612705f316478855d1fa99f824f45aadde4cd1d7a6d7094439d68674b69d\" returns successfully" Aug 13 00:18:11.753076 kubelet[2741]: I0813 00:18:11.752948 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bbb989d69-n9r76" podStartSLOduration=2.712051572 podStartE2EDuration="20.752918027s" podCreationTimestamp="2025-08-13 00:17:51 +0000 UTC" firstStartedPulling="2025-08-13 00:17:52.353995724 +0000 UTC m=+63.621720328" lastFinishedPulling="2025-08-13 00:18:10.394862179 +0000 UTC m=+81.662586783" observedRunningTime="2025-08-13 00:18:11.746731484 +0000 UTC m=+83.014456128" watchObservedRunningTime="2025-08-13 00:18:11.752918027 +0000 UTC m=+83.020642631" Aug 13 00:18:15.575276 containerd[1592]: time="2025-08-13T00:18:15.574538180Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:15.578736 containerd[1592]: time="2025-08-13T00:18:15.578472165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:18:15.581261 containerd[1592]: time="2025-08-13T00:18:15.580985938Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:15.590184 containerd[1592]: time="2025-08-13T00:18:15.589986431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:15.595391 containerd[1592]: time="2025-08-13T00:18:15.594865332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 5.196792638s" Aug 13 00:18:15.595391 containerd[1592]: time="2025-08-13T00:18:15.595076940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:18:15.603249 containerd[1592]: time="2025-08-13T00:18:15.602759184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:18:15.607723 containerd[1592]: time="2025-08-13T00:18:15.607265551Z" level=info msg="CreateContainer within sandbox \"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:18:15.670867 containerd[1592]: time="2025-08-13T00:18:15.670654016Z" level=info msg="CreateContainer within sandbox \"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9835987dbfa555a764b37de0f564fafe53c706cf62742b76d88e237225d5c66\"" Aug 13 00:18:15.683135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380429963.mount: Deactivated successfully. 
Aug 13 00:18:15.691277 containerd[1592]: time="2025-08-13T00:18:15.688756046Z" level=info msg="StartContainer for \"b9835987dbfa555a764b37de0f564fafe53c706cf62742b76d88e237225d5c66\"" Aug 13 00:18:16.429821 containerd[1592]: time="2025-08-13T00:18:16.429566560Z" level=info msg="StartContainer for \"b9835987dbfa555a764b37de0f564fafe53c706cf62742b76d88e237225d5c66\" returns successfully" Aug 13 00:18:17.788089 kubelet[2741]: I0813 00:18:17.785949 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:18:20.112510 kubelet[2741]: I0813 00:18:20.110985 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:18:20.949909 containerd[1592]: time="2025-08-13T00:18:20.949775484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:20.954045 containerd[1592]: time="2025-08-13T00:18:20.953863840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:18:20.957433 containerd[1592]: time="2025-08-13T00:18:20.957351733Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:20.971989 containerd[1592]: time="2025-08-13T00:18:20.971618638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:20.979228 containerd[1592]: time="2025-08-13T00:18:20.977714270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 5.374710917s" Aug 13 00:18:20.979228 containerd[1592]: time="2025-08-13T00:18:20.977799754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:18:20.985651 containerd[1592]: time="2025-08-13T00:18:20.985584531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:18:21.080588 containerd[1592]: time="2025-08-13T00:18:21.078698821Z" level=info msg="CreateContainer within sandbox \"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:18:21.127419 containerd[1592]: time="2025-08-13T00:18:21.125494977Z" level=info msg="CreateContainer within sandbox \"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1b2c9ca85f104fa79d48fbea3aa7ff483968f9aa677492f770970dbd1b634e57\"" Aug 13 00:18:21.135406 containerd[1592]: time="2025-08-13T00:18:21.131538569Z" level=info msg="StartContainer for \"1b2c9ca85f104fa79d48fbea3aa7ff483968f9aa677492f770970dbd1b634e57\"" Aug 13 00:18:21.164553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183045045.mount: Deactivated successfully. 
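
The "CreateContainer within sandbox ... returns container id" / "StartContainer ... returns successfully" pairs above are kubelet driving containerd's CRI RuntimeService: create a container bound to an already-running pod sandbox, then start it by id. A stripped-down sketch of that call pair, reusing the sandbox id logged above; the sandbox and container configs are reduced to placeholders, since kubelet supplies far more than shown here:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id from "RunPodSandbox for ...calico-kube-controllers... returns sandbox id" above.
	sandboxID := "d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0"
	sandboxCfg := &runtimeapi.PodSandboxConfig{} // placeholder; a real request needs the full config

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-kube-controllers", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/kube-controllers:v3.30.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", created.ContainerId)
}

The var-lib-containerd-tmpmounts units that systemd deactivates around these calls are the transient mounts containerd uses while preparing container snapshots; their cleanup succeeding is normal.
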
Aug 13 00:18:21.603538 containerd[1592]: time="2025-08-13T00:18:21.602458921Z" level=info msg="StartContainer for \"1b2c9ca85f104fa79d48fbea3aa7ff483968f9aa677492f770970dbd1b634e57\" returns successfully" Aug 13 00:18:21.878330 kubelet[2741]: I0813 00:18:21.876782 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d645b7db-qg8sr" podStartSLOduration=53.321423916 podStartE2EDuration="1m8.876750407s" podCreationTimestamp="2025-08-13 00:17:13 +0000 UTC" firstStartedPulling="2025-08-13 00:18:00.04438258 +0000 UTC m=+71.312107184" lastFinishedPulling="2025-08-13 00:18:15.599709071 +0000 UTC m=+86.867433675" observedRunningTime="2025-08-13 00:18:16.815412412 +0000 UTC m=+88.083137016" watchObservedRunningTime="2025-08-13 00:18:21.876750407 +0000 UTC m=+93.144475011" Aug 13 00:18:23.264704 containerd[1592]: time="2025-08-13T00:18:23.264614339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:23.267748 containerd[1592]: time="2025-08-13T00:18:23.267584574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:18:23.274239 containerd[1592]: time="2025-08-13T00:18:23.272740974Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:23.283883 containerd[1592]: time="2025-08-13T00:18:23.283809644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:23.292871 containerd[1592]: time="2025-08-13T00:18:23.292774311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.307109937s" Aug 13 00:18:23.292871 containerd[1592]: time="2025-08-13T00:18:23.292857314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:18:23.301449 containerd[1592]: time="2025-08-13T00:18:23.301381845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:18:23.304670 containerd[1592]: time="2025-08-13T00:18:23.304591049Z" level=info msg="CreateContainer within sandbox \"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:18:23.366558 containerd[1592]: time="2025-08-13T00:18:23.365441169Z" level=info msg="CreateContainer within sandbox \"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"92cc83c38229ffbfb3b4913544117447d1400df58de5f15aa81f0fdee0c9a01f\"" Aug 13 00:18:23.373257 containerd[1592]: time="2025-08-13T00:18:23.372949580Z" level=info msg="StartContainer for \"92cc83c38229ffbfb3b4913544117447d1400df58de5f15aa81f0fdee0c9a01f\"" Aug 13 00:18:23.594435 kubelet[2741]: I0813 00:18:23.594087 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-855f47cdff-5778s" podStartSLOduration=34.800452068 podStartE2EDuration="54.594055235s" podCreationTimestamp="2025-08-13 00:17:29 +0000 UTC" firstStartedPulling="2025-08-13 00:18:01.189983008 +0000 UTC m=+72.457707612" lastFinishedPulling="2025-08-13 00:18:20.983586135 +0000 UTC m=+92.251310779" observedRunningTime="2025-08-13 00:18:21.876611641 +0000 UTC m=+93.144336285" watchObservedRunningTime="2025-08-13 00:18:23.594055235 +0000 UTC m=+94.861779839" Aug 13 00:18:23.760281 containerd[1592]: time="2025-08-13T00:18:23.759045953Z" level=info msg="StartContainer for \"92cc83c38229ffbfb3b4913544117447d1400df58de5f15aa81f0fdee0c9a01f\" returns successfully" Aug 13 00:18:23.796252 containerd[1592]: time="2025-08-13T00:18:23.796061429Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:23.797915 containerd[1592]: time="2025-08-13T00:18:23.797823217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:18:23.821389 containerd[1592]: time="2025-08-13T00:18:23.821292527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 518.30714ms" Aug 13 00:18:23.821389 containerd[1592]: time="2025-08-13T00:18:23.821377690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:18:23.825505 containerd[1592]: time="2025-08-13T00:18:23.824302604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:18:23.832860 containerd[1592]: time="2025-08-13T00:18:23.832763172Z" level=info msg="CreateContainer within sandbox \"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:18:23.917598 containerd[1592]: time="2025-08-13T00:18:23.917502578Z" level=info msg="CreateContainer within sandbox \"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d20da8b996afc044644f5e08fd8fe1e01d1b48c01d275d7f17f5f6965ed3534f\"" Aug 13 00:18:23.923153 containerd[1592]: time="2025-08-13T00:18:23.922833585Z" level=info msg="StartContainer for \"d20da8b996afc044644f5e08fd8fe1e01d1b48c01d275d7f17f5f6965ed3534f\"" Aug 13 00:18:24.258887 containerd[1592]: time="2025-08-13T00:18:24.258657218Z" level=info msg="StartContainer for \"d20da8b996afc044644f5e08fd8fe1e01d1b48c01d275d7f17f5f6965ed3534f\" returns successfully" Aug 13 00:18:24.955020 kubelet[2741]: I0813 00:18:24.954880 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d645b7db-rhp59" podStartSLOduration=50.460198825 podStartE2EDuration="1m11.95483835s" podCreationTimestamp="2025-08-13 00:17:13 +0000 UTC" firstStartedPulling="2025-08-13 00:18:02.328719002 +0000 UTC m=+73.596443646" lastFinishedPulling="2025-08-13 00:18:23.823358567 +0000 UTC m=+95.091083171" observedRunningTime="2025-08-13 00:18:24.952500378 +0000 UTC m=+96.220224982" 
watchObservedRunningTime="2025-08-13 00:18:24.95483835 +0000 UTC m=+96.222563034" Aug 13 00:18:26.199233 containerd[1592]: time="2025-08-13T00:18:26.197374273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:26.204301 containerd[1592]: time="2025-08-13T00:18:26.202166382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:18:26.204486 containerd[1592]: time="2025-08-13T00:18:26.204378669Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:26.230232 containerd[1592]: time="2025-08-13T00:18:26.224576543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:18:26.254231 containerd[1592]: time="2025-08-13T00:18:26.242403085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.417978876s" Aug 13 00:18:26.254231 containerd[1592]: time="2025-08-13T00:18:26.242501528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:18:26.290236 containerd[1592]: time="2025-08-13T00:18:26.286537741Z" level=info msg="CreateContainer within sandbox \"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:18:26.549924 containerd[1592]: time="2025-08-13T00:18:26.549723134Z" level=info msg="CreateContainer within sandbox \"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ead03bfbcc025365b487fa1854ca3ac4bafac53eafa36b28fcd9d3fed2af1aa3\"" Aug 13 00:18:26.553946 containerd[1592]: time="2025-08-13T00:18:26.553857297Z" level=info msg="StartContainer for \"ead03bfbcc025365b487fa1854ca3ac4bafac53eafa36b28fcd9d3fed2af1aa3\"" Aug 13 00:18:26.858849 containerd[1592]: time="2025-08-13T00:18:26.858624447Z" level=info msg="StartContainer for \"ead03bfbcc025365b487fa1854ca3ac4bafac53eafa36b28fcd9d3fed2af1aa3\" returns successfully" Aug 13 00:18:26.940319 kubelet[2741]: I0813 00:18:26.939986 2741 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:18:26.984839 kubelet[2741]: I0813 00:18:26.983863 2741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6hw4j" podStartSLOduration=34.767869827 podStartE2EDuration="58.983833532s" podCreationTimestamp="2025-08-13 00:17:28 +0000 UTC" firstStartedPulling="2025-08-13 00:18:02.055514563 +0000 UTC m=+73.323239167" lastFinishedPulling="2025-08-13 00:18:26.271478268 +0000 UTC m=+97.539202872" observedRunningTime="2025-08-13 00:18:26.983589523 +0000 UTC m=+98.251314167" watchObservedRunningTime="2025-08-13 
00:18:26.983833532 +0000 UTC m=+98.251558376" Aug 13 00:18:27.191967 kubelet[2741]: I0813 00:18:27.191059 2741 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:18:27.210114 kubelet[2741]: I0813 00:18:27.210063 2741 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:18:49.479419 containerd[1592]: time="2025-08-13T00:18:49.478108034Z" level=info msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.653 [WARNING][5601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6428bddd-411d-486c-a798-4f373ec4640c", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d", Pod:"coredns-7c65d6cfc9-nf8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia156e7fbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.653 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.654 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" iface="eth0" netns="" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.654 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.654 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.735 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.736 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.737 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.763 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.763 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.767 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:49.778501 containerd[1592]: 2025-08-13 00:18:49.772 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:49.778501 containerd[1592]: time="2025-08-13T00:18:49.776871655Z" level=info msg="TearDown network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" successfully" Aug 13 00:18:49.778501 containerd[1592]: time="2025-08-13T00:18:49.776921457Z" level=info msg="StopPodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" returns successfully" Aug 13 00:18:49.782276 containerd[1592]: time="2025-08-13T00:18:49.782009952Z" level=info msg="RemovePodSandbox for \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" Aug 13 00:18:49.815827 containerd[1592]: time="2025-08-13T00:18:49.815394962Z" level=info msg="Forcibly stopping sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\"" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:49.979 [WARNING][5622] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6428bddd-411d-486c-a798-4f373ec4640c", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"cf0f676854ae3ac7475d28105b65609f8f1095437d63b4aebb40399f4b42156d", Pod:"coredns-7c65d6cfc9-nf8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia156e7fbc89", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:49.980 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:49.980 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" iface="eth0" netns="" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:49.980 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:49.980 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.136 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.141 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.141 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.209 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.209 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" HandleID="k8s-pod-network.f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--nf8md-eth0" Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.214 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:50.230235 containerd[1592]: 2025-08-13 00:18:50.220 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f" Aug 13 00:18:50.235262 containerd[1592]: time="2025-08-13T00:18:50.231603043Z" level=info msg="TearDown network for sandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" successfully" Aug 13 00:18:50.246235 containerd[1592]: time="2025-08-13T00:18:50.245822365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:50.247266 containerd[1592]: time="2025-08-13T00:18:50.247174662Z" level=info msg="RemovePodSandbox \"f17dd6860306b2ed79077f775f9cfb0b0063c8febd7093560e1e829a287c610f\" returns successfully" Aug 13 00:18:50.249568 containerd[1592]: time="2025-08-13T00:18:50.249515322Z" level=info msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.395 [WARNING][5644] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"211992a4-e03b-407a-b6e8-049cb37a8c67", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d", Pod:"goldmane-58fd7646b9-75vg2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cff7db1a58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.396 [INFO][5644] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.396 [INFO][5644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" iface="eth0" netns="" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.396 [INFO][5644] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.396 [INFO][5644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.470 [INFO][5651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.471 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.471 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.509 [WARNING][5651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.509 [INFO][5651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.513 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:50.528346 containerd[1592]: 2025-08-13 00:18:50.521 [INFO][5644] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.528346 containerd[1592]: time="2025-08-13T00:18:50.528001309Z" level=info msg="TearDown network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" successfully" Aug 13 00:18:50.528346 containerd[1592]: time="2025-08-13T00:18:50.528063112Z" level=info msg="StopPodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" returns successfully" Aug 13 00:18:50.534407 containerd[1592]: time="2025-08-13T00:18:50.533551024Z" level=info msg="RemovePodSandbox for \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" Aug 13 00:18:50.534407 containerd[1592]: time="2025-08-13T00:18:50.533610227Z" level=info msg="Forcibly stopping sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\"" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.639 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"211992a4-e03b-407a-b6e8-049cb37a8c67", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"412d092571cbd853a3934939e659da56de1368bc88ec4c16a5ae64a2761dd97d", Pod:"goldmane-58fd7646b9-75vg2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5cff7db1a58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.641 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.641 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" iface="eth0" netns="" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.641 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.641 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.702 [INFO][5673] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.703 [INFO][5673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.703 [INFO][5673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.727 [WARNING][5673] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.728 [INFO][5673] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" HandleID="k8s-pod-network.88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Workload="ci--4081--3--5--0--684996fd0b-k8s-goldmane--58fd7646b9--75vg2-eth0" Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.733 [INFO][5673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:50.743270 containerd[1592]: 2025-08-13 00:18:50.737 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6" Aug 13 00:18:50.743270 containerd[1592]: time="2025-08-13T00:18:50.741960366Z" level=info msg="TearDown network for sandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" successfully" Aug 13 00:18:50.754777 containerd[1592]: time="2025-08-13T00:18:50.754611261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:50.755061 containerd[1592]: time="2025-08-13T00:18:50.754834350Z" level=info msg="RemovePodSandbox \"88c2c0ccd939b1a41bd48596eb855bfa8856f0006528c9f80bab6e19fc64c5b6\" returns successfully" Aug 13 00:18:50.756773 containerd[1592]: time="2025-08-13T00:18:50.755960398Z" level=info msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.881 [WARNING][5687] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"960cd058-0fa0-479e-b7da-fdbdd01280da", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f", Pod:"calico-apiserver-79d645b7db-rhp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid402a4b462e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.882 [INFO][5687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.883 [INFO][5687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" iface="eth0" netns="" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.884 [INFO][5687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.884 [INFO][5687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.963 [INFO][5694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.964 [INFO][5694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.964 [INFO][5694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.989 [WARNING][5694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.989 [INFO][5694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:50.997 [INFO][5694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:51.009263 containerd[1592]: 2025-08-13 00:18:51.002 [INFO][5687] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.009263 containerd[1592]: time="2025-08-13T00:18:51.008670255Z" level=info msg="TearDown network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" successfully" Aug 13 00:18:51.009263 containerd[1592]: time="2025-08-13T00:18:51.008723218Z" level=info msg="StopPodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" returns successfully" Aug 13 00:18:51.011379 containerd[1592]: time="2025-08-13T00:18:51.009771622Z" level=info msg="RemovePodSandbox for \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" Aug 13 00:18:51.011379 containerd[1592]: time="2025-08-13T00:18:51.009827785Z" level=info msg="Forcibly stopping sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\"" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.117 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"960cd058-0fa0-479e-b7da-fdbdd01280da", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"89450be7c16709f77b57bebbec1e676062c7a052792e86d53534f9bd7cf1762f", Pod:"calico-apiserver-79d645b7db-rhp59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid402a4b462e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.118 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.120 [INFO][5709] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" iface="eth0" netns="" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.122 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.122 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.198 [INFO][5717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.198 [INFO][5717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.198 [INFO][5717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.227 [WARNING][5717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.227 [INFO][5717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" HandleID="k8s-pod-network.6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--rhp59-eth0" Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.231 [INFO][5717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:51.239630 containerd[1592]: 2025-08-13 00:18:51.235 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265" Aug 13 00:18:51.242096 containerd[1592]: time="2025-08-13T00:18:51.239693813Z" level=info msg="TearDown network for sandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" successfully" Aug 13 00:18:51.249970 containerd[1592]: time="2025-08-13T00:18:51.249490908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:51.249970 containerd[1592]: time="2025-08-13T00:18:51.249647755Z" level=info msg="RemovePodSandbox \"6dd4704411c78360f5a9740f6420ef0e02074321db8fe3803a897e3c5b79e265\" returns successfully" Aug 13 00:18:51.254765 containerd[1592]: time="2025-08-13T00:18:51.254327914Z" level=info msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.363 [WARNING][5732] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2fa714a0-45d3-4601-af8c-8a6aebd91ca9", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71", Pod:"coredns-7c65d6cfc9-gzcsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34517fbf04f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.366 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.366 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" iface="eth0" netns="" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.366 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.366 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.511 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.516 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.516 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.555 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.557 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.577 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:51.591081 containerd[1592]: 2025-08-13 00:18:51.584 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.596010 containerd[1592]: time="2025-08-13T00:18:51.591350686Z" level=info msg="TearDown network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" successfully" Aug 13 00:18:51.596010 containerd[1592]: time="2025-08-13T00:18:51.593879034Z" level=info msg="StopPodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" returns successfully" Aug 13 00:18:51.600530 containerd[1592]: time="2025-08-13T00:18:51.599811685Z" level=info msg="RemovePodSandbox for \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" Aug 13 00:18:51.600530 containerd[1592]: time="2025-08-13T00:18:51.599883528Z" level=info msg="Forcibly stopping sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\"" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.842 [WARNING][5754] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2fa714a0-45d3-4601-af8c-8a6aebd91ca9", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71", Pod:"coredns-7c65d6cfc9-gzcsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34517fbf04f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.842 [INFO][5754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.842 [INFO][5754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" iface="eth0" netns="" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.843 [INFO][5754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.843 [INFO][5754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.942 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.943 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.943 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.970 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.970 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" HandleID="k8s-pod-network.e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Workload="ci--4081--3--5--0--684996fd0b-k8s-coredns--7c65d6cfc9--gzcsq-eth0" Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.975 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:51.994605 containerd[1592]: 2025-08-13 00:18:51.985 [INFO][5754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976" Aug 13 00:18:52.000899 containerd[1592]: time="2025-08-13T00:18:51.998385668Z" level=info msg="TearDown network for sandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" successfully" Aug 13 00:18:52.016342 containerd[1592]: time="2025-08-13T00:18:52.015967535Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:52.016342 containerd[1592]: time="2025-08-13T00:18:52.016158903Z" level=info msg="RemovePodSandbox \"e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976\" returns successfully" Aug 13 00:18:52.022231 containerd[1592]: time="2025-08-13T00:18:52.019717175Z" level=info msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.130 [WARNING][5777] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.131 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.131 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
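Each WARNING above dumps the live WorkloadEndpoint, and its Spec.ContainerID (for example a77f7c3f... for coredns-7c65d6cfc9-gzcsq) differs from the sandbox ID named in the delete request: that mismatch is exactly why the plugin logs "don't delete WEP" and only cleans up the old sandbox's netns and addresses. Below is a minimal sketch of that guard in Go, using simplified stand-in types rather than the real v3.WorkloadEndpoint.

```go
package main

import "fmt"

// WorkloadEndpoint is a simplified stand-in for the v3 object in the dumps:
// it remembers which sandbox currently owns the endpoint.
type WorkloadEndpoint struct {
	Name        string
	ContainerID string
}

// teardown decides what a CNI DEL for cniContainerID may touch. A late DEL
// for a superseded sandbox must not delete the endpoint of its replacement.
func teardown(wep *WorkloadEndpoint, cniContainerID string) {
	if wep != nil && wep.ContainerID != cniContainerID {
		fmt.Println("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.")
	} else if wep != nil {
		fmt.Println("deleting WorkloadEndpoint", wep.Name)
	}
	// Netns cleanup and IPAM release still run for the old sandbox ID,
	// exactly as the records show after each mismatch warning.
	fmt.Println("Cleaning up netns ContainerID=" + cniContainerID)
	fmt.Println("Releasing IP address(es) ContainerID=" + cniContainerID)
}

func main() {
	wep := &WorkloadEndpoint{
		Name:        "coredns-7c65d6cfc9-gzcsq",
		ContainerID: "a77f7c3f4886c88beffc53f05a081cd1b57914e65ef11b61857eaf7a0bc5af71",
	}
	// DEL arrives for the superseded sandbox e9f5359e... -> keep the WEP.
	teardown(wep, "e9f5359ea8c22c933f9b37aa86f0ef5a1429e40d347045692ce4eed2cb848976")
}
```

Incidentally, the Ports in the coredns dump are printed in hex: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153 (the coredns metrics port).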
ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" iface="eth0" netns="" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.131 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.131 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.244 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.245 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.245 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.272 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.273 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.276 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:52.295616 containerd[1592]: 2025-08-13 00:18:52.284 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.301042 containerd[1592]: time="2025-08-13T00:18:52.298503700Z" level=info msg="TearDown network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" successfully" Aug 13 00:18:52.301042 containerd[1592]: time="2025-08-13T00:18:52.298600464Z" level=info msg="StopPodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" returns successfully" Aug 13 00:18:52.304718 containerd[1592]: time="2025-08-13T00:18:52.302866445Z" level=info msg="RemovePodSandbox for \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" Aug 13 00:18:52.304718 containerd[1592]: time="2025-08-13T00:18:52.302931928Z" level=info msg="Forcibly stopping sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\"" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.407 [WARNING][5798] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" WorkloadEndpoint="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.407 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.407 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" iface="eth0" netns="" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.407 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.407 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.467 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.467 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.467 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.511 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.512 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" HandleID="k8s-pod-network.89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Workload="ci--4081--3--5--0--684996fd0b-k8s-whisker--6cc5bfd4c--rq45m-eth0" Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.516 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:52.531488 containerd[1592]: 2025-08-13 00:18:52.521 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f" Aug 13 00:18:52.531488 containerd[1592]: time="2025-08-13T00:18:52.530832411Z" level=info msg="TearDown network for sandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" successfully" Aug 13 00:18:52.579673 containerd[1592]: time="2025-08-13T00:18:52.579022179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:52.579673 containerd[1592]: time="2025-08-13T00:18:52.579148984Z" level=info msg="RemovePodSandbox \"89bbf7cc46b906a7ecc8682874727be1412f94c7f9da31cf2835cfc17ac02c2f\" returns successfully" Aug 13 00:18:52.585440 containerd[1592]: time="2025-08-13T00:18:52.583864704Z" level=info msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.718 [WARNING][5820] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f19c543-3502-4177-93bc-f402734db516", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d", Pod:"calico-apiserver-79d645b7db-qg8sr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47afaa4e66f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.721 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.721 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" iface="eth0" netns="" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.721 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.721 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.781 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.782 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.782 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.806 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.806 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.811 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:52.820819 containerd[1592]: 2025-08-13 00:18:52.815 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:52.826251 containerd[1592]: time="2025-08-13T00:18:52.820854254Z" level=info msg="TearDown network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" successfully" Aug 13 00:18:52.826251 containerd[1592]: time="2025-08-13T00:18:52.820899456Z" level=info msg="StopPodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" returns successfully" Aug 13 00:18:52.826251 containerd[1592]: time="2025-08-13T00:18:52.823651972Z" level=info msg="RemovePodSandbox for \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" Aug 13 00:18:52.826251 containerd[1592]: time="2025-08-13T00:18:52.823714895Z" level=info msg="Forcibly stopping sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\"" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.023 [WARNING][5841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0", GenerateName:"calico-apiserver-79d645b7db-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f19c543-3502-4177-93bc-f402734db516", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d645b7db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"63bfae2df2a93ab0c9e2105c4ca73f1b9f7ea5436ea9edc691a87390c412827d", Pod:"calico-apiserver-79d645b7db-qg8sr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47afaa4e66f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.024 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.024 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" iface="eth0" netns="" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.024 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.024 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.138 [INFO][5848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.138 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.138 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.174 [WARNING][5848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.174 [INFO][5848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" HandleID="k8s-pod-network.ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--apiserver--79d645b7db--qg8sr-eth0" Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.179 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:53.216268 containerd[1592]: 2025-08-13 00:18:53.192 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6" Aug 13 00:18:53.216268 containerd[1592]: time="2025-08-13T00:18:53.213487152Z" level=info msg="TearDown network for sandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" successfully" Aug 13 00:18:53.245467 containerd[1592]: time="2025-08-13T00:18:53.245021735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:53.245687 containerd[1592]: time="2025-08-13T00:18:53.245527396Z" level=info msg="RemovePodSandbox \"ef07b17426e63460d98d0ed7f977dcffbf2557a183042fed05f46fdc64be75a6\" returns successfully" Aug 13 00:18:53.248849 containerd[1592]: time="2025-08-13T00:18:53.248116346Z" level=info msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.438 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"851fc6af-b9af-4d67-92e5-4dcf6cbec03a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2", Pod:"csi-node-driver-6hw4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd607f504f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.439 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.439 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" iface="eth0" netns="" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.439 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.439 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.495 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.496 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.496 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.525 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.525 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.529 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:53.542923 containerd[1592]: 2025-08-13 00:18:53.537 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.547436 containerd[1592]: time="2025-08-13T00:18:53.543121503Z" level=info msg="TearDown network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" successfully" Aug 13 00:18:53.547436 containerd[1592]: time="2025-08-13T00:18:53.543326232Z" level=info msg="StopPodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" returns successfully" Aug 13 00:18:53.547436 containerd[1592]: time="2025-08-13T00:18:53.545502084Z" level=info msg="RemovePodSandbox for \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" Aug 13 00:18:53.547436 containerd[1592]: time="2025-08-13T00:18:53.545635810Z" level=info msg="Forcibly stopping sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\"" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.693 [WARNING][5884] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"851fc6af-b9af-4d67-92e5-4dcf6cbec03a", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"03c33aac33c99bd6c331e3af818cb0ec62b64658c90201285a64b45194e194f2", Pod:"csi-node-driver-6hw4j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd607f504f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.694 [INFO][5884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.694 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" iface="eth0" netns="" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.694 [INFO][5884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.694 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.781 [INFO][5891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.782 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.782 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.814 [WARNING][5891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.816 [INFO][5891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" HandleID="k8s-pod-network.c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Workload="ci--4081--3--5--0--684996fd0b-k8s-csi--node--driver--6hw4j-eth0" Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.826 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:53.844423 containerd[1592]: 2025-08-13 00:18:53.832 [INFO][5884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c" Aug 13 00:18:53.844423 containerd[1592]: time="2025-08-13T00:18:53.840494561Z" level=info msg="TearDown network for sandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" successfully" Aug 13 00:18:53.857185 containerd[1592]: time="2025-08-13T00:18:53.856342515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:53.857185 containerd[1592]: time="2025-08-13T00:18:53.856484121Z" level=info msg="RemovePodSandbox \"c4bb794ea147501f2c93b9fb67287f818930aa151b6b05d61970aff0077db55c\" returns successfully" Aug 13 00:18:53.860292 containerd[1592]: time="2025-08-13T00:18:53.859538171Z" level=info msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.072 [WARNING][5906] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0", GenerateName:"calico-kube-controllers-855f47cdff-", Namespace:"calico-system", SelfLink:"", UID:"660c42fe-0b74-4f87-a47c-3e9a64771e8c", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855f47cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0", Pod:"calico-kube-controllers-855f47cdff-5778s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f12c3e2a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.073 [INFO][5906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.073 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" iface="eth0" netns="" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.074 [INFO][5906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.074 [INFO][5906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.149 [INFO][5913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.150 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.150 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.186 [WARNING][5913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.186 [INFO][5913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.193 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:54.217979 containerd[1592]: 2025-08-13 00:18:54.205 [INFO][5906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.221619 containerd[1592]: time="2025-08-13T00:18:54.219382745Z" level=info msg="TearDown network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" successfully" Aug 13 00:18:54.221619 containerd[1592]: time="2025-08-13T00:18:54.219492669Z" level=info msg="StopPodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" returns successfully" Aug 13 00:18:54.221619 containerd[1592]: time="2025-08-13T00:18:54.220622157Z" level=info msg="RemovePodSandbox for \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" Aug 13 00:18:54.221619 containerd[1592]: time="2025-08-13T00:18:54.220681200Z" level=info msg="Forcibly stopping sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\"" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.363 [WARNING][5928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0", GenerateName:"calico-kube-controllers-855f47cdff-", Namespace:"calico-system", SelfLink:"", UID:"660c42fe-0b74-4f87-a47c-3e9a64771e8c", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855f47cdff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-0-684996fd0b", ContainerID:"d53a3e48ec7a16d8788064311f153df75bc338221e3ff59a2f13b2d82920f0c0", Pod:"calico-kube-controllers-855f47cdff-5778s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f12c3e2a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.365 [INFO][5928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.368 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" iface="eth0" netns="" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.369 [INFO][5928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.371 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.453 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.454 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.454 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.483 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.483 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" HandleID="k8s-pod-network.71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Workload="ci--4081--3--5--0--684996fd0b-k8s-calico--kube--controllers--855f47cdff--5778s-eth0" Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.493 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:18:54.510298 containerd[1592]: 2025-08-13 00:18:54.500 [INFO][5928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480" Aug 13 00:18:54.510298 containerd[1592]: time="2025-08-13T00:18:54.508458911Z" level=info msg="TearDown network for sandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" successfully" Aug 13 00:18:54.524297 containerd[1592]: time="2025-08-13T00:18:54.523483751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:18:54.524297 containerd[1592]: time="2025-08-13T00:18:54.523630157Z" level=info msg="RemovePodSandbox \"71ec6d13e92ea75b05b15b95d73942fe5b4044e2e4fd456d3a269ad407720480\" returns successfully" Aug 13 00:18:56.388858 systemd[1]: Started sshd@7-138.201.175.117:22-45.164.98.205:55286.service - OpenSSH per-connection server daemon (45.164.98.205:55286). Aug 13 00:18:58.502597 sshd[5943]: Invalid user cusadmin from 45.164.98.205 port 55286 Aug 13 00:18:58.996037 sshd[5945]: pam_faillock(sshd:auth): User unknown Aug 13 00:18:58.998106 sshd[5943]: Postponed keyboard-interactive for invalid user cusadmin from 45.164.98.205 port 55286 ssh2 [preauth] Aug 13 00:18:59.517093 systemd[1]: run-containerd-runc-k8s.io-77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5-runc.GMfBpa.mount: Deactivated successfully. Aug 13 00:18:59.528044 sshd[5945]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:18:59.528098 sshd[5945]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.164.98.205 Aug 13 00:18:59.534500 sshd[5945]: pam_faillock(sshd:auth): User unknown Aug 13 00:19:02.112384 sshd[5943]: PAM: Permission denied for illegal user cusadmin from 45.164.98.205 Aug 13 00:19:02.113531 sshd[5943]: Failed keyboard-interactive/pam for invalid user cusadmin from 45.164.98.205 port 55286 ssh2 Aug 13 00:19:02.650014 sshd[5943]: Connection closed by invalid user cusadmin 45.164.98.205 port 55286 [preauth] Aug 13 00:19:02.659481 systemd[1]: sshd@7-138.201.175.117:22-45.164.98.205:55286.service: Deactivated successfully. 
Aug 13 00:19:12.080787 update_engine[1578]: I20250813 00:19:12.078233 1578 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 00:19:12.080787 update_engine[1578]: I20250813 00:19:12.078318 1578 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 00:19:12.080787 update_engine[1578]: I20250813 00:19:12.078823 1578 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 00:19:12.087343 update_engine[1578]: I20250813 00:19:12.084527 1578 omaha_request_params.cc:62] Current group set to lts
Aug 13 00:19:12.087343 update_engine[1578]: I20250813 00:19:12.084728 1578 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 00:19:12.087343 update_engine[1578]: I20250813 00:19:12.084749 1578 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 00:19:12.087343 update_engine[1578]: I20250813 00:19:12.084782 1578 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:19:12.099828 update_engine[1578]: I20250813 00:19:12.096657 1578 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 00:19:12.099828 update_engine[1578]: I20250813 00:19:12.096836 1578 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:19:12.099828 update_engine[1578]: I20250813 00:19:12.096858 1578 omaha_request_action.cc:272] Request:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]:
Aug 13 00:19:12.099828 update_engine[1578]: I20250813 00:19:12.096873 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:19:12.116402 update_engine[1578]: I20250813 00:19:12.112636 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:19:12.116402 update_engine[1578]: I20250813 00:19:12.116288 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:19:12.118265 update_engine[1578]: E20250813 00:19:12.117293 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:19:12.118265 update_engine[1578]: I20250813 00:19:12.117430 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 00:19:12.202823 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 00:19:21.988302 update_engine[1578]: I20250813 00:19:21.987406 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:19:21.988302 update_engine[1578]: I20250813 00:19:21.987807 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:19:21.988302 update_engine[1578]: I20250813 00:19:21.988147 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:19:21.991937 update_engine[1578]: E20250813 00:19:21.991873 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:19:21.992395 update_engine[1578]: I20250813 00:19:21.992321 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Aug 13 00:19:31.988533 update_engine[1578]: I20250813 00:19:31.988410 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:19:31.989366 update_engine[1578]: I20250813 00:19:31.988866 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:19:31.989366 update_engine[1578]: I20250813 00:19:31.989289 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:19:31.990689 update_engine[1578]: E20250813 00:19:31.990372 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:19:31.990689 update_engine[1578]: I20250813 00:19:31.990535 1578 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 00:19:35.348955 systemd[1]: Started sshd@8-138.201.175.117:22-139.178.89.65:57554.service - OpenSSH per-connection server daemon (139.178.89.65:57554).
Aug 13 00:19:36.405023 sshd[6114]: Accepted publickey for core from 139.178.89.65 port 57554 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:19:36.413433 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:19:36.434686 systemd-logind[1570]: New session 8 of user core.
Aug 13 00:19:36.444913 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:19:37.395946 sshd[6114]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:37.409029 systemd[1]: sshd@8-138.201.175.117:22-139.178.89.65:57554.service: Deactivated successfully.
Aug 13 00:19:37.423615 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:19:37.424917 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:19:37.432797 systemd-logind[1570]: Removed session 8.
Aug 13 00:19:41.982822 update_engine[1578]: I20250813 00:19:41.982677 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:19:41.983774 update_engine[1578]: I20250813 00:19:41.983285 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:19:41.983774 update_engine[1578]: I20250813 00:19:41.983670 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:19:41.985673 update_engine[1578]: E20250813 00:19:41.985572 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:19:41.985832 update_engine[1578]: I20250813 00:19:41.985714 1578 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 00:19:41.985832 update_engine[1578]: I20250813 00:19:41.985739 1578 omaha_request_action.cc:617] Omaha request response:
Aug 13 00:19:41.985962 update_engine[1578]: E20250813 00:19:41.985877 1578 omaha_request_action.cc:636] Omaha request network transfer failed.
Aug 13 00:19:41.985962 update_engine[1578]: I20250813 00:19:41.985911 1578 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Aug 13 00:19:41.985962 update_engine[1578]: I20250813 00:19:41.985926 1578 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:19:41.985962 update_engine[1578]: I20250813 00:19:41.985939 1578 update_attempter.cc:306] Processing Done.
Aug 13 00:19:41.986342 update_engine[1578]: E20250813 00:19:41.985965 1578 update_attempter.cc:619] Update failed.
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.985980 1578 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.985996 1578 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.986011 1578 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.986253 1578 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.986306 1578 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 00:19:41.986342 update_engine[1578]: I20250813 00:19:41.986321 1578 omaha_request_action.cc:272] Request:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.986342 update_engine[1578]:
Aug 13 00:19:41.987221 update_engine[1578]: I20250813 00:19:41.986358 1578 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 00:19:41.987221 update_engine[1578]: I20250813 00:19:41.986665 1578 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 00:19:41.987221 update_engine[1578]: I20250813 00:19:41.987000 1578 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 00:19:41.987729 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 00:19:41.988508 update_engine[1578]: E20250813 00:19:41.988282 1578 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988384 1578 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988401 1578 omaha_request_action.cc:617] Omaha request response:
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988416 1578 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988430 1578 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988462 1578 update_attempter.cc:306] Processing Done.
Aug 13 00:19:41.988508 update_engine[1578]: I20250813 00:19:41.988482 1578 update_attempter.cc:310] Error event sent.
Aug 13 00:19:41.988941 update_engine[1578]: I20250813 00:19:41.988502 1578 update_check_scheduler.cc:74] Next update check in 42m4s
Aug 13 00:19:41.990300 locksmithd[1615]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 00:19:42.574713 systemd[1]: Started sshd@9-138.201.175.117:22-139.178.89.65:38364.service - OpenSSH per-connection server daemon (139.178.89.65:38364).
Aug 13 00:19:43.623633 sshd[6175]: Accepted publickey for core from 139.178.89.65 port 38364 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:19:43.630465 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:19:43.641547 systemd-logind[1570]: New session 9 of user core.
Aug 13 00:19:43.652430 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:19:44.650442 sshd[6175]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:44.667327 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:19:44.670921 systemd[1]: sshd@9-138.201.175.117:22-139.178.89.65:38364.service: Deactivated successfully.
Aug 13 00:19:44.693351 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:19:44.707085 systemd-logind[1570]: Removed session 9.
Aug 13 00:19:49.832808 systemd[1]: Started sshd@10-138.201.175.117:22-139.178.89.65:38492.service - OpenSSH per-connection server daemon (139.178.89.65:38492).
Aug 13 00:19:50.895235 sshd[6192]: Accepted publickey for core from 139.178.89.65 port 38492 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:19:50.902874 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:19:50.931855 systemd-logind[1570]: New session 10 of user core.
Aug 13 00:19:50.941938 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:19:51.887039 sshd[6192]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:51.905257 systemd[1]: sshd@10-138.201.175.117:22-139.178.89.65:38492.service: Deactivated successfully.
Aug 13 00:19:51.907575 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:19:51.924291 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:19:51.931036 systemd-logind[1570]: Removed session 10.
Aug 13 00:19:57.059787 systemd[1]: Started sshd@11-138.201.175.117:22-139.178.89.65:38506.service - OpenSSH per-connection server daemon (139.178.89.65:38506).
Aug 13 00:19:58.091146 sshd[6210]: Accepted publickey for core from 139.178.89.65 port 38506 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:19:58.094586 sshd[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:19:58.104873 systemd-logind[1570]: New session 11 of user core.
Aug 13 00:19:58.112862 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:19:58.969707 sshd[6210]: pam_unix(sshd:session): session closed for user core
Aug 13 00:19:58.979512 systemd[1]: sshd@11-138.201.175.117:22-139.178.89.65:38506.service: Deactivated successfully.
Aug 13 00:19:58.988990 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:19:58.989136 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:19:58.992771 systemd-logind[1570]: Removed session 11.
Aug 13 00:20:04.159780 systemd[1]: Started sshd@12-138.201.175.117:22-139.178.89.65:38766.service - OpenSSH per-connection server daemon (139.178.89.65:38766).
Aug 13 00:20:05.260681 sshd[6249]: Accepted publickey for core from 139.178.89.65 port 38766 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:05.265811 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:05.283278 systemd-logind[1570]: New session 12 of user core.
Aug 13 00:20:05.296115 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:20:06.178804 sshd[6249]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:06.189035 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:20:06.192382 systemd[1]: sshd@12-138.201.175.117:22-139.178.89.65:38766.service: Deactivated successfully.
Aug 13 00:20:06.198382 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:20:06.201309 systemd-logind[1570]: Removed session 12.
Aug 13 00:20:11.365367 systemd[1]: Started sshd@13-138.201.175.117:22-139.178.89.65:56296.service - OpenSSH per-connection server daemon (139.178.89.65:56296).
Aug 13 00:20:12.459751 sshd[6325]: Accepted publickey for core from 139.178.89.65 port 56296 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:12.464108 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:12.480586 systemd-logind[1570]: New session 13 of user core.
Aug 13 00:20:12.487626 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:20:13.414228 sshd[6325]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:13.429808 systemd[1]: sshd@13-138.201.175.117:22-139.178.89.65:56296.service: Deactivated successfully.
Aug 13 00:20:13.445850 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:20:13.448957 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:20:13.451389 systemd-logind[1570]: Removed session 13.
Aug 13 00:20:18.601736 systemd[1]: Started sshd@14-138.201.175.117:22-139.178.89.65:56298.service - OpenSSH per-connection server daemon (139.178.89.65:56298).
Aug 13 00:20:19.702414 sshd[6340]: Accepted publickey for core from 139.178.89.65 port 56298 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:19.707509 sshd[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:19.734331 systemd-logind[1570]: New session 14 of user core.
Aug 13 00:20:19.740934 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:20:20.701600 sshd[6340]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:20.714010 systemd[1]: sshd@14-138.201.175.117:22-139.178.89.65:56298.service: Deactivated successfully.
Aug 13 00:20:20.731042 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:20:20.733883 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:20:20.739628 systemd-logind[1570]: Removed session 14.
Aug 13 00:20:21.852268 systemd[1]: run-containerd-runc-k8s.io-1b2c9ca85f104fa79d48fbea3aa7ff483968f9aa677492f770970dbd1b634e57-runc.N1nLMw.mount: Deactivated successfully.
Aug 13 00:20:25.869122 systemd[1]: Started sshd@15-138.201.175.117:22-139.178.89.65:36124.service - OpenSSH per-connection server daemon (139.178.89.65:36124).
Aug 13 00:20:26.911382 sshd[6374]: Accepted publickey for core from 139.178.89.65 port 36124 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:26.916382 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:26.930253 systemd-logind[1570]: New session 15 of user core.
Aug 13 00:20:26.937968 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:20:27.829001 sshd[6374]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:27.843264 systemd[1]: sshd@15-138.201.175.117:22-139.178.89.65:36124.service: Deactivated successfully.
Aug 13 00:20:27.858894 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:20:27.869420 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:20:27.872647 systemd-logind[1570]: Removed session 15.
Aug 13 00:20:33.006934 systemd[1]: Started sshd@16-138.201.175.117:22-139.178.89.65:39882.service - OpenSSH per-connection server daemon (139.178.89.65:39882).
Aug 13 00:20:34.036477 sshd[6416]: Accepted publickey for core from 139.178.89.65 port 39882 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:34.041876 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:34.064851 systemd-logind[1570]: New session 16 of user core.
Aug 13 00:20:34.069899 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:20:34.948582 sshd[6416]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:34.956552 systemd[1]: sshd@16-138.201.175.117:22-139.178.89.65:39882.service: Deactivated successfully.
Aug 13 00:20:34.967372 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:20:34.972510 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:20:34.975700 systemd-logind[1570]: Removed session 16.
Aug 13 00:20:40.145468 systemd[1]: Started sshd@17-138.201.175.117:22-139.178.89.65:47532.service - OpenSSH per-connection server daemon (139.178.89.65:47532).
Aug 13 00:20:41.253244 sshd[6440]: Accepted publickey for core from 139.178.89.65 port 47532 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:41.257351 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:41.279255 systemd-logind[1570]: New session 17 of user core.
Aug 13 00:20:41.284412 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:20:42.343766 sshd[6440]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:42.354933 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:20:42.355926 systemd[1]: sshd@17-138.201.175.117:22-139.178.89.65:47532.service: Deactivated successfully.
Aug 13 00:20:42.371125 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:20:42.379366 systemd-logind[1570]: Removed session 17.
Aug 13 00:20:47.509873 systemd[1]: Started sshd@18-138.201.175.117:22-139.178.89.65:47548.service - OpenSSH per-connection server daemon (139.178.89.65:47548).
Aug 13 00:20:48.565759 sshd[6496]: Accepted publickey for core from 139.178.89.65 port 47548 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:48.570022 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:48.584763 systemd-logind[1570]: New session 18 of user core.
Aug 13 00:20:48.590743 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:20:49.648786 sshd[6496]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:49.659912 systemd[1]: sshd@18-138.201.175.117:22-139.178.89.65:47548.service: Deactivated successfully.
Aug 13 00:20:49.672602 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:20:49.677787 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:20:49.683463 systemd-logind[1570]: Removed session 18.
Aug 13 00:20:54.828952 systemd[1]: Started sshd@19-138.201.175.117:22-139.178.89.65:56262.service - OpenSSH per-connection server daemon (139.178.89.65:56262).
Aug 13 00:20:55.879548 sshd[6513]: Accepted publickey for core from 139.178.89.65 port 56262 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:20:55.882897 sshd[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:55.899373 systemd-logind[1570]: New session 19 of user core.
Aug 13 00:20:55.906538 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:20:56.864100 sshd[6513]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:56.870718 systemd[1]: sshd@19-138.201.175.117:22-139.178.89.65:56262.service: Deactivated successfully.
Aug 13 00:20:56.882511 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:20:56.883262 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:20:56.891618 systemd-logind[1570]: Removed session 19.
Aug 13 00:21:02.095186 systemd[1]: Started sshd@20-138.201.175.117:22-139.178.89.65:52532.service - OpenSSH per-connection server daemon (139.178.89.65:52532).
Aug 13 00:21:03.160105 sshd[6551]: Accepted publickey for core from 139.178.89.65 port 52532 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:03.165554 sshd[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:03.187390 systemd-logind[1570]: New session 20 of user core.
Aug 13 00:21:03.197412 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:21:04.144608 sshd[6551]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:04.154558 systemd[1]: sshd@20-138.201.175.117:22-139.178.89.65:52532.service: Deactivated successfully.
Aug 13 00:21:04.165053 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:21:04.168473 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:21:04.173966 systemd-logind[1570]: Removed session 20.
Aug 13 00:21:09.335801 systemd[1]: Started sshd@21-138.201.175.117:22-139.178.89.65:34466.service - OpenSSH per-connection server daemon (139.178.89.65:34466).
Aug 13 00:21:10.440249 sshd[6608]: Accepted publickey for core from 139.178.89.65 port 34466 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:10.445541 sshd[6608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:10.475571 systemd-logind[1570]: New session 21 of user core.
Aug 13 00:21:10.481770 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:21:11.354615 sshd[6608]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:11.367674 systemd[1]: sshd@21-138.201.175.117:22-139.178.89.65:34466.service: Deactivated successfully.
Aug 13 00:21:11.375709 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:21:11.377907 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:21:11.381040 systemd-logind[1570]: Removed session 21.
Aug 13 00:21:16.522882 systemd[1]: Started sshd@22-138.201.175.117:22-139.178.89.65:34476.service - OpenSSH per-connection server daemon (139.178.89.65:34476).
Aug 13 00:21:17.614427 sshd[6663]: Accepted publickey for core from 139.178.89.65 port 34476 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:17.618346 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:17.636151 systemd-logind[1570]: New session 22 of user core.
Aug 13 00:21:17.642961 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:21:18.558680 sshd[6663]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:18.578954 systemd[1]: sshd@22-138.201.175.117:22-139.178.89.65:34476.service: Deactivated successfully.
Aug 13 00:21:18.582907 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:21:18.598785 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:21:18.608852 systemd-logind[1570]: Removed session 22.
Aug 13 00:21:23.728781 systemd[1]: Started sshd@23-138.201.175.117:22-139.178.89.65:37440.service - OpenSSH per-connection server daemon (139.178.89.65:37440).
Aug 13 00:21:24.771185 sshd[6696]: Accepted publickey for core from 139.178.89.65 port 37440 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:24.776549 sshd[6696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:24.804480 systemd-logind[1570]: New session 23 of user core.
Aug 13 00:21:24.810287 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:21:25.848716 sshd[6696]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:25.861560 systemd[1]: sshd@23-138.201.175.117:22-139.178.89.65:37440.service: Deactivated successfully.
Aug 13 00:21:25.871479 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:21:25.873351 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:21:25.879542 systemd-logind[1570]: Removed session 23.
Aug 13 00:21:31.019977 systemd[1]: Started sshd@24-138.201.175.117:22-139.178.89.65:57814.service - OpenSSH per-connection server daemon (139.178.89.65:57814).
Aug 13 00:21:32.106743 sshd[6735]: Accepted publickey for core from 139.178.89.65 port 57814 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:32.114610 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:32.134662 systemd-logind[1570]: New session 24 of user core.
Aug 13 00:21:32.148838 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:21:33.097765 sshd[6735]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:33.109305 systemd[1]: sshd@24-138.201.175.117:22-139.178.89.65:57814.service: Deactivated successfully.
Aug 13 00:21:33.111323 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:21:33.123459 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:21:33.131122 systemd-logind[1570]: Removed session 24.
Aug 13 00:21:38.268741 systemd[1]: Started sshd@25-138.201.175.117:22-139.178.89.65:57828.service - OpenSSH per-connection server daemon (139.178.89.65:57828).
Aug 13 00:21:39.312690 sshd[6751]: Accepted publickey for core from 139.178.89.65 port 57828 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:39.316128 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:39.326725 systemd-logind[1570]: New session 25 of user core.
Aug 13 00:21:39.333935 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:21:40.201941 sshd[6751]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:40.212698 systemd[1]: sshd@25-138.201.175.117:22-139.178.89.65:57828.service: Deactivated successfully.
Aug 13 00:21:40.222918 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:21:40.225937 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:21:40.229582 systemd-logind[1570]: Removed session 25.
Aug 13 00:21:45.377852 systemd[1]: Started sshd@26-138.201.175.117:22-139.178.89.65:40180.service - OpenSSH per-connection server daemon (139.178.89.65:40180).
Aug 13 00:21:46.412183 sshd[6807]: Accepted publickey for core from 139.178.89.65 port 40180 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:46.417498 sshd[6807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:46.429903 systemd-logind[1570]: New session 26 of user core.
Aug 13 00:21:46.442867 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:21:47.312409 sshd[6807]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:47.324987 systemd[1]: sshd@26-138.201.175.117:22-139.178.89.65:40180.service: Deactivated successfully.
Aug 13 00:21:47.340521 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:21:47.344882 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:21:47.351857 systemd-logind[1570]: Removed session 26.
Aug 13 00:21:52.501022 systemd[1]: Started sshd@27-138.201.175.117:22-139.178.89.65:48612.service - OpenSSH per-connection server daemon (139.178.89.65:48612).
Aug 13 00:21:53.545732 sshd[6824]: Accepted publickey for core from 139.178.89.65 port 48612 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:21:53.555762 sshd[6824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:53.572853 systemd-logind[1570]: New session 27 of user core.
Aug 13 00:21:53.581974 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:21:54.508489 sshd[6824]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:54.518403 systemd[1]: sshd@27-138.201.175.117:22-139.178.89.65:48612.service: Deactivated successfully.
Aug 13 00:21:54.538153 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:21:54.543107 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:21:54.550563 systemd-logind[1570]: Removed session 27.
Aug 13 00:21:58.146815 systemd[1]: Started sshd@28-138.201.175.117:22-178.62.108.116:60328.service - OpenSSH per-connection server daemon (178.62.108.116:60328).
Aug 13 00:21:58.338175 sshd[6841]: Connection closed by authenticating user root 178.62.108.116 port 60328 [preauth]
Aug 13 00:21:58.346885 systemd[1]: sshd@28-138.201.175.117:22-178.62.108.116:60328.service: Deactivated successfully.
Aug 13 00:21:59.681784 systemd[1]: Started sshd@29-138.201.175.117:22-139.178.89.65:54550.service - OpenSSH per-connection server daemon (139.178.89.65:54550).
Aug 13 00:22:00.768245 sshd[6869]: Accepted publickey for core from 139.178.89.65 port 54550 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:00.769871 sshd[6869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:00.790454 systemd-logind[1570]: New session 28 of user core.
Aug 13 00:22:00.794850 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:22:01.825992 sshd[6869]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:01.840045 systemd[1]: sshd@29-138.201.175.117:22-139.178.89.65:54550.service: Deactivated successfully.
Aug 13 00:22:01.854534 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:22:01.860902 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:22:01.864905 systemd-logind[1570]: Removed session 28.
Aug 13 00:22:07.004021 systemd[1]: Started sshd@30-138.201.175.117:22-139.178.89.65:54558.service - OpenSSH per-connection server daemon (139.178.89.65:54558).
Aug 13 00:22:08.086370 sshd[6903]: Accepted publickey for core from 139.178.89.65 port 54558 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:08.089657 sshd[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:08.104679 systemd-logind[1570]: New session 29 of user core.
Aug 13 00:22:08.111830 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:22:09.004060 sshd[6903]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:09.015301 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:22:09.017322 systemd[1]: sshd@30-138.201.175.117:22-139.178.89.65:54558.service: Deactivated successfully.
Aug 13 00:22:09.030876 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:22:09.038740 systemd-logind[1570]: Removed session 29.
Aug 13 00:22:14.176934 systemd[1]: Started sshd@31-138.201.175.117:22-139.178.89.65:48754.service - OpenSSH per-connection server daemon (139.178.89.65:48754).
Aug 13 00:22:15.216135 sshd[6958]: Accepted publickey for core from 139.178.89.65 port 48754 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:15.219981 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:15.231611 systemd-logind[1570]: New session 30 of user core.
Aug 13 00:22:15.243218 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:22:16.099921 sshd[6958]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:16.111305 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:22:16.112285 systemd[1]: sshd@31-138.201.175.117:22-139.178.89.65:48754.service: Deactivated successfully.
Aug 13 00:22:16.118725 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:22:16.121069 systemd-logind[1570]: Removed session 30.
Aug 13 00:22:21.275757 systemd[1]: Started sshd@32-138.201.175.117:22-139.178.89.65:36670.service - OpenSSH per-connection server daemon (139.178.89.65:36670).
Aug 13 00:22:22.314397 sshd[6974]: Accepted publickey for core from 139.178.89.65 port 36670 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:22.317824 sshd[6974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:22.333336 systemd-logind[1570]: New session 31 of user core.
Aug 13 00:22:22.339452 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 00:22:23.251612 sshd[6974]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:23.268063 systemd[1]: sshd@32-138.201.175.117:22-139.178.89.65:36670.service: Deactivated successfully.
Aug 13 00:22:23.280064 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 00:22:23.283182 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit.
Aug 13 00:22:23.289583 systemd-logind[1570]: Removed session 31.
Aug 13 00:22:28.423930 systemd[1]: Started sshd@33-138.201.175.117:22-139.178.89.65:36684.service - OpenSSH per-connection server daemon (139.178.89.65:36684).
Aug 13 00:22:29.468406 sshd[7011]: Accepted publickey for core from 139.178.89.65 port 36684 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:29.472801 sshd[7011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:29.498633 systemd-logind[1570]: New session 32 of user core.
Aug 13 00:22:29.507927 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 00:22:30.495154 sshd[7011]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:30.509361 systemd[1]: sshd@33-138.201.175.117:22-139.178.89.65:36684.service: Deactivated successfully.
Aug 13 00:22:30.526659 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 00:22:30.530672 systemd-logind[1570]: Session 32 logged out. Waiting for processes to exit.
Aug 13 00:22:30.540483 systemd-logind[1570]: Removed session 32.
Aug 13 00:22:35.679669 systemd[1]: Started sshd@34-138.201.175.117:22-139.178.89.65:50668.service - OpenSSH per-connection server daemon (139.178.89.65:50668).
Aug 13 00:22:36.768231 sshd[7047]: Accepted publickey for core from 139.178.89.65 port 50668 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:36.771328 sshd[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:36.781879 systemd-logind[1570]: New session 33 of user core.
Aug 13 00:22:36.791940 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 00:22:37.666758 sshd[7047]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:37.685901 systemd-logind[1570]: Session 33 logged out. Waiting for processes to exit.
Aug 13 00:22:37.687967 systemd[1]: sshd@34-138.201.175.117:22-139.178.89.65:50668.service: Deactivated successfully.
Aug 13 00:22:37.697047 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 00:22:37.701701 systemd-logind[1570]: Removed session 33.
Aug 13 00:22:42.840906 systemd[1]: Started sshd@35-138.201.175.117:22-139.178.89.65:38218.service - OpenSSH per-connection server daemon (139.178.89.65:38218).
Aug 13 00:22:43.880433 sshd[7122]: Accepted publickey for core from 139.178.89.65 port 38218 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:43.884722 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:43.904458 systemd-logind[1570]: New session 34 of user core.
Aug 13 00:22:43.917096 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 00:22:44.801860 sshd[7122]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:44.814174 systemd[1]: sshd@35-138.201.175.117:22-139.178.89.65:38218.service: Deactivated successfully.
Aug 13 00:22:44.829077 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 00:22:44.833945 systemd-logind[1570]: Session 34 logged out. Waiting for processes to exit.
Aug 13 00:22:44.840276 systemd-logind[1570]: Removed session 34.
Aug 13 00:22:49.973821 systemd[1]: Started sshd@36-138.201.175.117:22-139.178.89.65:38912.service - OpenSSH per-connection server daemon (139.178.89.65:38912).
Aug 13 00:22:51.016116 sshd[7139]: Accepted publickey for core from 139.178.89.65 port 38912 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:51.019639 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:51.032899 systemd-logind[1570]: New session 35 of user core.
Aug 13 00:22:51.040320 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 00:22:51.895547 sshd[7139]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:51.905643 systemd[1]: sshd@36-138.201.175.117:22-139.178.89.65:38912.service: Deactivated successfully.
Aug 13 00:22:51.914100 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 00:22:51.917751 systemd-logind[1570]: Session 35 logged out. Waiting for processes to exit.
Aug 13 00:22:51.920573 systemd-logind[1570]: Removed session 35.
Aug 13 00:22:57.078355 systemd[1]: Started sshd@37-138.201.175.117:22-139.178.89.65:38922.service - OpenSSH per-connection server daemon (139.178.89.65:38922).
Aug 13 00:22:58.132052 sshd[7156]: Accepted publickey for core from 139.178.89.65 port 38922 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:22:58.136525 sshd[7156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:22:58.147459 systemd-logind[1570]: New session 36 of user core.
Aug 13 00:22:58.158065 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 00:22:59.081583 sshd[7156]: pam_unix(sshd:session): session closed for user core
Aug 13 00:22:59.098030 systemd[1]: sshd@37-138.201.175.117:22-139.178.89.65:38922.service: Deactivated successfully.
Aug 13 00:22:59.116526 systemd-logind[1570]: Session 36 logged out. Waiting for processes to exit.
Aug 13 00:22:59.117404 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 00:22:59.126528 systemd-logind[1570]: Removed session 36.
Aug 13 00:23:04.257770 systemd[1]: Started sshd@38-138.201.175.117:22-139.178.89.65:46044.service - OpenSSH per-connection server daemon (139.178.89.65:46044).
Aug 13 00:23:05.346337 sshd[7195]: Accepted publickey for core from 139.178.89.65 port 46044 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:05.351000 sshd[7195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:05.375984 systemd-logind[1570]: New session 37 of user core.
Aug 13 00:23:05.380783 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 00:23:06.331569 sshd[7195]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:06.340589 systemd-logind[1570]: Session 37 logged out. Waiting for processes to exit.
Aug 13 00:23:06.341773 systemd[1]: sshd@38-138.201.175.117:22-139.178.89.65:46044.service: Deactivated successfully.
Aug 13 00:23:06.355683 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 00:23:06.362569 systemd-logind[1570]: Removed session 37.
Aug 13 00:23:11.533382 systemd[1]: Started sshd@39-138.201.175.117:22-139.178.89.65:36628.service - OpenSSH per-connection server daemon (139.178.89.65:36628).
Aug 13 00:23:12.686049 sshd[7269]: Accepted publickey for core from 139.178.89.65 port 36628 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:12.688906 sshd[7269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:12.703164 systemd-logind[1570]: New session 38 of user core.
Aug 13 00:23:12.709821 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 00:23:13.708758 sshd[7269]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:13.723230 systemd[1]: sshd@39-138.201.175.117:22-139.178.89.65:36628.service: Deactivated successfully.
Aug 13 00:23:13.741147 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 00:23:13.741673 systemd-logind[1570]: Session 38 logged out. Waiting for processes to exit.
Aug 13 00:23:13.753490 systemd-logind[1570]: Removed session 38.
Aug 13 00:23:16.406044 systemd[1]: Started sshd@40-138.201.175.117:22-64.53.7.231:50652.service - OpenSSH per-connection server daemon (64.53.7.231:50652).
Aug 13 00:23:18.566492 sshd[7284]: Invalid user admin from 64.53.7.231 port 50652
Aug 13 00:23:18.878508 systemd[1]: Started sshd@41-138.201.175.117:22-139.178.89.65:36640.service - OpenSSH per-connection server daemon (139.178.89.65:36640).
Aug 13 00:23:19.330951 sshd[7294]: pam_faillock(sshd:auth): User unknown
Aug 13 00:23:19.333971 sshd[7284]: Postponed keyboard-interactive for invalid user admin from 64.53.7.231 port 50652 ssh2 [preauth]
Aug 13 00:23:19.783813 sshd[7294]: pam_unix(sshd:auth): check pass; user unknown
Aug 13 00:23:19.783877 sshd[7294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=64.53.7.231
Aug 13 00:23:19.788733 sshd[7294]: pam_faillock(sshd:auth): User unknown
Aug 13 00:23:19.923656 sshd[7292]: Accepted publickey for core from 139.178.89.65 port 36640 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:19.929677 sshd[7292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:19.950323 systemd-logind[1570]: New session 39 of user core.
Aug 13 00:23:19.958258 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 00:23:20.903759 sshd[7292]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:20.912060 systemd[1]: sshd@41-138.201.175.117:22-139.178.89.65:36640.service: Deactivated successfully.
Aug 13 00:23:20.922674 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 00:23:20.926030 systemd-logind[1570]: Session 39 logged out. Waiting for processes to exit.
Aug 13 00:23:20.928731 systemd-logind[1570]: Removed session 39.
Aug 13 00:23:22.198321 sshd[7284]: PAM: Permission denied for illegal user admin from 64.53.7.231
Aug 13 00:23:22.200706 sshd[7284]: Failed keyboard-interactive/pam for invalid user admin from 64.53.7.231 port 50652 ssh2
Aug 13 00:23:22.631112 sshd[7284]: Connection closed by invalid user admin 64.53.7.231 port 50652 [preauth]
Aug 13 00:23:22.637728 systemd[1]: sshd@40-138.201.175.117:22-64.53.7.231:50652.service: Deactivated successfully.
Aug 13 00:23:26.079895 systemd[1]: Started sshd@42-138.201.175.117:22-139.178.89.65:35122.service - OpenSSH per-connection server daemon (139.178.89.65:35122).
Aug 13 00:23:27.114352 sshd[7335]: Accepted publickey for core from 139.178.89.65 port 35122 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:27.124044 sshd[7335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:27.150646 systemd-logind[1570]: New session 40 of user core.
Aug 13 00:23:27.156840 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 00:23:28.046583 sshd[7335]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:28.061285 systemd[1]: sshd@42-138.201.175.117:22-139.178.89.65:35122.service: Deactivated successfully.
Aug 13 00:23:28.064327 systemd-logind[1570]: Session 40 logged out. Waiting for processes to exit.
Aug 13 00:23:28.075601 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 00:23:28.079539 systemd-logind[1570]: Removed session 40.
Aug 13 00:23:33.239322 systemd[1]: Started sshd@43-138.201.175.117:22-139.178.89.65:58518.service - OpenSSH per-connection server daemon (139.178.89.65:58518).
Aug 13 00:23:34.343888 sshd[7372]: Accepted publickey for core from 139.178.89.65 port 58518 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:34.354725 sshd[7372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:34.378533 systemd-logind[1570]: New session 41 of user core.
Aug 13 00:23:34.388934 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 00:23:35.309601 sshd[7372]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:35.331667 systemd-logind[1570]: Session 41 logged out. Waiting for processes to exit.
Aug 13 00:23:35.332723 systemd[1]: sshd@43-138.201.175.117:22-139.178.89.65:58518.service: Deactivated successfully.
Aug 13 00:23:35.346984 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 00:23:35.356499 systemd-logind[1570]: Removed session 41.
Aug 13 00:23:40.507959 systemd[1]: Started sshd@44-138.201.175.117:22-139.178.89.65:52530.service - OpenSSH per-connection server daemon (139.178.89.65:52530).
Aug 13 00:23:41.621155 sshd[7419]: Accepted publickey for core from 139.178.89.65 port 52530 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:41.624927 sshd[7419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:41.652367 systemd-logind[1570]: New session 42 of user core.
Aug 13 00:23:41.682409 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 00:23:42.625498 sshd[7419]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:42.635057 systemd[1]: sshd@44-138.201.175.117:22-139.178.89.65:52530.service: Deactivated successfully.
Aug 13 00:23:42.643978 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 00:23:42.647357 systemd-logind[1570]: Session 42 logged out. Waiting for processes to exit.
Aug 13 00:23:42.650354 systemd-logind[1570]: Removed session 42.
Aug 13 00:23:47.812778 systemd[1]: Started sshd@45-138.201.175.117:22-139.178.89.65:52536.service - OpenSSH per-connection server daemon (139.178.89.65:52536).
Aug 13 00:23:48.932293 sshd[7442]: Accepted publickey for core from 139.178.89.65 port 52536 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:48.935380 sshd[7442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:48.950555 systemd-logind[1570]: New session 43 of user core.
Aug 13 00:23:48.955289 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 00:23:50.114409 sshd[7442]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:50.124723 systemd-logind[1570]: Session 43 logged out. Waiting for processes to exit.
Aug 13 00:23:50.133512 systemd[1]: sshd@45-138.201.175.117:22-139.178.89.65:52536.service: Deactivated successfully.
Aug 13 00:23:50.144900 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 00:23:50.151019 systemd-logind[1570]: Removed session 43.
Aug 13 00:23:50.278998 systemd[1]: Started sshd@46-138.201.175.117:22-139.178.89.65:39672.service - OpenSSH per-connection server daemon (139.178.89.65:39672).
Aug 13 00:23:51.320720 sshd[7459]: Accepted publickey for core from 139.178.89.65 port 39672 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:51.328138 sshd[7459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:51.357324 systemd-logind[1570]: New session 44 of user core.
Aug 13 00:23:51.361872 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 00:23:52.376817 sshd[7459]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:52.388168 systemd[1]: sshd@46-138.201.175.117:22-139.178.89.65:39672.service: Deactivated successfully.
Aug 13 00:23:52.403317 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 00:23:52.406167 systemd-logind[1570]: Session 44 logged out. Waiting for processes to exit.
Aug 13 00:23:52.410319 systemd-logind[1570]: Removed session 44.
Aug 13 00:23:52.553800 systemd[1]: Started sshd@47-138.201.175.117:22-139.178.89.65:39676.service - OpenSSH per-connection server daemon (139.178.89.65:39676).
Aug 13 00:23:53.598754 sshd[7471]: Accepted publickey for core from 139.178.89.65 port 39676 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:23:53.603073 sshd[7471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:23:53.614186 systemd-logind[1570]: New session 45 of user core.
Aug 13 00:23:53.625146 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 00:23:54.535635 sshd[7471]: pam_unix(sshd:session): session closed for user core
Aug 13 00:23:54.545795 systemd-logind[1570]: Session 45 logged out. Waiting for processes to exit.
Aug 13 00:23:54.546391 systemd[1]: sshd@47-138.201.175.117:22-139.178.89.65:39676.service: Deactivated successfully.
Aug 13 00:23:54.555113 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 00:23:54.561948 systemd-logind[1570]: Removed session 45.
Aug 13 00:23:59.731568 systemd[1]: Started sshd@48-138.201.175.117:22-139.178.89.65:37904.service - OpenSSH per-connection server daemon (139.178.89.65:37904).
Aug 13 00:24:00.837258 sshd[7511]: Accepted publickey for core from 139.178.89.65 port 37904 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:00.847361 sshd[7511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:00.866248 systemd-logind[1570]: New session 46 of user core.
Aug 13 00:24:00.880002 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 00:24:01.853138 sshd[7511]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:01.864120 systemd[1]: sshd@48-138.201.175.117:22-139.178.89.65:37904.service: Deactivated successfully.
Aug 13 00:24:01.881537 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 00:24:01.885256 systemd-logind[1570]: Session 46 logged out. Waiting for processes to exit.
Aug 13 00:24:01.889365 systemd-logind[1570]: Removed session 46.
Aug 13 00:24:05.109537 systemd[1]: run-containerd-runc-k8s.io-3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7-runc.iylAPZ.mount: Deactivated successfully.
Aug 13 00:24:07.017976 systemd[1]: Started sshd@49-138.201.175.117:22-139.178.89.65:37916.service - OpenSSH per-connection server daemon (139.178.89.65:37916).
Aug 13 00:24:08.082424 sshd[7544]: Accepted publickey for core from 139.178.89.65 port 37916 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:08.085699 sshd[7544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:08.109940 systemd-logind[1570]: New session 47 of user core.
Aug 13 00:24:08.117817 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 00:24:09.149874 sshd[7544]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:09.165685 systemd[1]: sshd@49-138.201.175.117:22-139.178.89.65:37916.service: Deactivated successfully.
Aug 13 00:24:09.180604 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 00:24:09.187890 systemd-logind[1570]: Session 47 logged out. Waiting for processes to exit.
Aug 13 00:24:09.193809 systemd-logind[1570]: Removed session 47.
Aug 13 00:24:14.326392 systemd[1]: Started sshd@50-138.201.175.117:22-139.178.89.65:35840.service - OpenSSH per-connection server daemon (139.178.89.65:35840).
Aug 13 00:24:15.368587 sshd[7599]: Accepted publickey for core from 139.178.89.65 port 35840 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:15.378039 sshd[7599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:15.400555 systemd-logind[1570]: New session 48 of user core.
Aug 13 00:24:15.404857 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 00:24:16.248039 sshd[7599]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:16.259640 systemd[1]: sshd@50-138.201.175.117:22-139.178.89.65:35840.service: Deactivated successfully.
Aug 13 00:24:16.268649 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 00:24:16.272483 systemd-logind[1570]: Session 48 logged out. Waiting for processes to exit.
Aug 13 00:24:16.275807 systemd-logind[1570]: Removed session 48.
Aug 13 00:24:21.427296 systemd[1]: Started sshd@51-138.201.175.117:22-139.178.89.65:36246.service - OpenSSH per-connection server daemon (139.178.89.65:36246).
Aug 13 00:24:22.468243 sshd[7635]: Accepted publickey for core from 139.178.89.65 port 36246 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:22.472858 sshd[7635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:22.494620 systemd-logind[1570]: New session 49 of user core.
Aug 13 00:24:22.503146 systemd[1]: Started session-49.scope - Session 49 of User core.
Aug 13 00:24:23.387625 sshd[7635]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:23.401658 systemd[1]: sshd@51-138.201.175.117:22-139.178.89.65:36246.service: Deactivated successfully.
Aug 13 00:24:23.411106 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 00:24:23.415130 systemd-logind[1570]: Session 49 logged out. Waiting for processes to exit.
Aug 13 00:24:23.419353 systemd-logind[1570]: Removed session 49.
Aug 13 00:24:28.581415 systemd[1]: Started sshd@52-138.201.175.117:22-139.178.89.65:36256.service - OpenSSH per-connection server daemon (139.178.89.65:36256).
Aug 13 00:24:29.672031 sshd[7672]: Accepted publickey for core from 139.178.89.65 port 36256 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:29.675717 sshd[7672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:29.686404 systemd-logind[1570]: New session 50 of user core.
Aug 13 00:24:29.693121 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 00:24:30.585707 sshd[7672]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:30.594112 systemd-logind[1570]: Session 50 logged out. Waiting for processes to exit.
Aug 13 00:24:30.595223 systemd[1]: sshd@52-138.201.175.117:22-139.178.89.65:36256.service: Deactivated successfully.
Aug 13 00:24:30.605256 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 00:24:30.608122 systemd-logind[1570]: Removed session 50.
Aug 13 00:24:35.752985 systemd[1]: Started sshd@53-138.201.175.117:22-139.178.89.65:49878.service - OpenSSH per-connection server daemon (139.178.89.65:49878).
Aug 13 00:24:36.795630 sshd[7709]: Accepted publickey for core from 139.178.89.65 port 49878 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:36.800007 sshd[7709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:36.811933 systemd-logind[1570]: New session 51 of user core.
Aug 13 00:24:36.823463 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 00:24:37.816616 sshd[7709]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:37.831894 systemd[1]: sshd@53-138.201.175.117:22-139.178.89.65:49878.service: Deactivated successfully.
Aug 13 00:24:37.842488 systemd-logind[1570]: Session 51 logged out. Waiting for processes to exit.
Aug 13 00:24:37.843678 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 00:24:37.849422 systemd-logind[1570]: Removed session 51.
Aug 13 00:24:43.004565 systemd[1]: Started sshd@54-138.201.175.117:22-139.178.89.65:47200.service - OpenSSH per-connection server daemon (139.178.89.65:47200).
Aug 13 00:24:44.043068 sshd[7763]: Accepted publickey for core from 139.178.89.65 port 47200 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:44.047178 sshd[7763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:44.063395 systemd-logind[1570]: New session 52 of user core.
Aug 13 00:24:44.070889 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 00:24:45.011460 sshd[7763]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:45.034388 systemd[1]: sshd@54-138.201.175.117:22-139.178.89.65:47200.service: Deactivated successfully.
Aug 13 00:24:45.034472 systemd-logind[1570]: Session 52 logged out. Waiting for processes to exit.
Aug 13 00:24:45.048764 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 00:24:45.053649 systemd-logind[1570]: Removed session 52.
Aug 13 00:24:50.190425 systemd[1]: Started sshd@55-138.201.175.117:22-139.178.89.65:33660.service - OpenSSH per-connection server daemon (139.178.89.65:33660).
Aug 13 00:24:51.227267 sshd[7779]: Accepted publickey for core from 139.178.89.65 port 33660 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:51.236845 sshd[7779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:51.265245 systemd-logind[1570]: New session 53 of user core.
Aug 13 00:24:51.270419 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 00:24:52.175380 sshd[7779]: pam_unix(sshd:session): session closed for user core
Aug 13 00:24:52.187166 systemd[1]: sshd@55-138.201.175.117:22-139.178.89.65:33660.service: Deactivated successfully.
Aug 13 00:24:52.200317 systemd-logind[1570]: Session 53 logged out. Waiting for processes to exit.
Aug 13 00:24:52.200573 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 00:24:52.206645 systemd-logind[1570]: Removed session 53.
Aug 13 00:24:57.353365 systemd[1]: Started sshd@56-138.201.175.117:22-139.178.89.65:33670.service - OpenSSH per-connection server daemon (139.178.89.65:33670).
Aug 13 00:24:58.402756 sshd[7795]: Accepted publickey for core from 139.178.89.65 port 33670 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:24:58.405899 sshd[7795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:24:58.422026 systemd-logind[1570]: New session 54 of user core.
Aug 13 00:24:58.428653 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 00:24:59.372073 sshd[7795]: pam_unix(sshd:session): session closed for user core Aug 13 00:24:59.397710 systemd[1]: sshd@56-138.201.175.117:22-139.178.89.65:33670.service: Deactivated successfully. Aug 13 00:24:59.414672 systemd-logind[1570]: Session 54 logged out. Waiting for processes to exit. Aug 13 00:24:59.416012 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 00:24:59.424870 systemd-logind[1570]: Removed session 54. Aug 13 00:25:04.543033 systemd[1]: Started sshd@57-138.201.175.117:22-139.178.89.65:54956.service - OpenSSH per-connection server daemon (139.178.89.65:54956). Aug 13 00:25:05.578288 sshd[7830]: Accepted publickey for core from 139.178.89.65 port 54956 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:05.582990 sshd[7830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:05.609584 systemd-logind[1570]: New session 55 of user core. Aug 13 00:25:05.612841 systemd[1]: Started session-55.scope - Session 55 of User core. Aug 13 00:25:06.526322 sshd[7830]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:06.537357 systemd[1]: sshd@57-138.201.175.117:22-139.178.89.65:54956.service: Deactivated successfully. Aug 13 00:25:06.550195 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 00:25:06.558168 systemd-logind[1570]: Session 55 logged out. Waiting for processes to exit. Aug 13 00:25:06.561798 systemd-logind[1570]: Removed session 55. Aug 13 00:25:11.726422 systemd[1]: Started sshd@58-138.201.175.117:22-139.178.89.65:47416.service - OpenSSH per-connection server daemon (139.178.89.65:47416). Aug 13 00:25:12.822093 sshd[7904]: Accepted publickey for core from 139.178.89.65 port 47416 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:12.830738 sshd[7904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:12.860331 systemd-logind[1570]: New session 56 of user core. Aug 13 00:25:12.868801 systemd[1]: Started session-56.scope - Session 56 of User core. Aug 13 00:25:13.797223 sshd[7904]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:13.811093 systemd[1]: sshd@58-138.201.175.117:22-139.178.89.65:47416.service: Deactivated successfully. Aug 13 00:25:13.828428 systemd[1]: session-56.scope: Deactivated successfully. Aug 13 00:25:13.836664 systemd-logind[1570]: Session 56 logged out. Waiting for processes to exit. Aug 13 00:25:13.844704 systemd-logind[1570]: Removed session 56. Aug 13 00:25:18.964813 systemd[1]: Started sshd@59-138.201.175.117:22-139.178.89.65:47430.service - OpenSSH per-connection server daemon (139.178.89.65:47430). Aug 13 00:25:20.041541 sshd[7917]: Accepted publickey for core from 139.178.89.65 port 47430 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:20.046612 sshd[7917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:20.069694 systemd-logind[1570]: New session 57 of user core. Aug 13 00:25:20.075791 systemd[1]: Started session-57.scope - Session 57 of User core. Aug 13 00:25:21.045031 sshd[7917]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:21.058478 systemd[1]: sshd@59-138.201.175.117:22-139.178.89.65:47430.service: Deactivated successfully. Aug 13 00:25:21.073397 systemd[1]: session-57.scope: Deactivated successfully. Aug 13 00:25:21.079648 systemd-logind[1570]: Session 57 logged out. Waiting for processes to exit. Aug 13 00:25:21.084319 systemd-logind[1570]: Removed session 57. 
Aug 13 00:25:26.220864 systemd[1]: Started sshd@60-138.201.175.117:22-139.178.89.65:60106.service - OpenSSH per-connection server daemon (139.178.89.65:60106). Aug 13 00:25:27.263414 sshd[7952]: Accepted publickey for core from 139.178.89.65 port 60106 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:27.265619 sshd[7952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:27.276241 systemd-logind[1570]: New session 58 of user core. Aug 13 00:25:27.288460 systemd[1]: Started session-58.scope - Session 58 of User core. Aug 13 00:25:28.214560 sshd[7952]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:28.230171 systemd[1]: sshd@60-138.201.175.117:22-139.178.89.65:60106.service: Deactivated successfully. Aug 13 00:25:28.246781 systemd[1]: session-58.scope: Deactivated successfully. Aug 13 00:25:28.247553 systemd-logind[1570]: Session 58 logged out. Waiting for processes to exit. Aug 13 00:25:28.255810 systemd-logind[1570]: Removed session 58. Aug 13 00:25:33.388628 systemd[1]: Started sshd@61-138.201.175.117:22-139.178.89.65:49718.service - OpenSSH per-connection server daemon (139.178.89.65:49718). Aug 13 00:25:34.449144 sshd[7989]: Accepted publickey for core from 139.178.89.65 port 49718 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:34.454080 sshd[7989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:34.466354 systemd-logind[1570]: New session 59 of user core. Aug 13 00:25:34.472429 systemd[1]: Started session-59.scope - Session 59 of User core. Aug 13 00:25:35.328001 sshd[7989]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:35.337139 systemd[1]: sshd@61-138.201.175.117:22-139.178.89.65:49718.service: Deactivated successfully. Aug 13 00:25:35.346360 systemd[1]: session-59.scope: Deactivated successfully. Aug 13 00:25:35.348855 systemd-logind[1570]: Session 59 logged out. Waiting for processes to exit. Aug 13 00:25:35.352360 systemd-logind[1570]: Removed session 59. Aug 13 00:25:40.530427 systemd[1]: Started sshd@62-138.201.175.117:22-139.178.89.65:56194.service - OpenSSH per-connection server daemon (139.178.89.65:56194). Aug 13 00:25:41.602778 sshd[8038]: Accepted publickey for core from 139.178.89.65 port 56194 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:41.608388 sshd[8038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:41.625777 systemd-logind[1570]: New session 60 of user core. Aug 13 00:25:41.635004 systemd[1]: Started session-60.scope - Session 60 of User core. Aug 13 00:25:42.597748 sshd[8038]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:42.613868 systemd[1]: sshd@62-138.201.175.117:22-139.178.89.65:56194.service: Deactivated successfully. Aug 13 00:25:42.629115 systemd[1]: session-60.scope: Deactivated successfully. Aug 13 00:25:42.631670 systemd-logind[1570]: Session 60 logged out. Waiting for processes to exit. Aug 13 00:25:42.635002 systemd-logind[1570]: Removed session 60. Aug 13 00:25:47.789478 systemd[1]: Started sshd@63-138.201.175.117:22-139.178.89.65:56204.service - OpenSSH per-connection server daemon (139.178.89.65:56204). 
Aug 13 00:25:48.883373 sshd[8060]: Accepted publickey for core from 139.178.89.65 port 56204 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:48.888915 sshd[8060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:48.904018 systemd-logind[1570]: New session 61 of user core. Aug 13 00:25:48.912002 systemd[1]: Started session-61.scope - Session 61 of User core. Aug 13 00:25:49.985617 sshd[8060]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:50.013468 systemd[1]: sshd@63-138.201.175.117:22-139.178.89.65:56204.service: Deactivated successfully. Aug 13 00:25:50.029776 systemd[1]: session-61.scope: Deactivated successfully. Aug 13 00:25:50.032348 systemd-logind[1570]: Session 61 logged out. Waiting for processes to exit. Aug 13 00:25:50.041740 systemd-logind[1570]: Removed session 61. Aug 13 00:25:55.155648 systemd[1]: Started sshd@64-138.201.175.117:22-139.178.89.65:52990.service - OpenSSH per-connection server daemon (139.178.89.65:52990). Aug 13 00:25:56.198820 sshd[8097]: Accepted publickey for core from 139.178.89.65 port 52990 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:25:56.202456 sshd[8097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:25:56.214891 systemd-logind[1570]: New session 62 of user core. Aug 13 00:25:56.223865 systemd[1]: Started session-62.scope - Session 62 of User core. Aug 13 00:25:57.085710 sshd[8097]: pam_unix(sshd:session): session closed for user core Aug 13 00:25:57.095003 systemd[1]: sshd@64-138.201.175.117:22-139.178.89.65:52990.service: Deactivated successfully. Aug 13 00:25:57.108310 systemd-logind[1570]: Session 62 logged out. Waiting for processes to exit. Aug 13 00:25:57.108936 systemd[1]: session-62.scope: Deactivated successfully. Aug 13 00:25:57.112642 systemd-logind[1570]: Removed session 62. Aug 13 00:26:02.259903 systemd[1]: Started sshd@65-138.201.175.117:22-139.178.89.65:35008.service - OpenSSH per-connection server daemon (139.178.89.65:35008). Aug 13 00:26:03.334992 sshd[8136]: Accepted publickey for core from 139.178.89.65 port 35008 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:03.338987 sshd[8136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:03.354322 systemd-logind[1570]: New session 63 of user core. Aug 13 00:26:03.359759 systemd[1]: Started session-63.scope - Session 63 of User core. Aug 13 00:26:04.290620 sshd[8136]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:04.302325 systemd-logind[1570]: Session 63 logged out. Waiting for processes to exit. Aug 13 00:26:04.304693 systemd[1]: sshd@65-138.201.175.117:22-139.178.89.65:35008.service: Deactivated successfully. Aug 13 00:26:04.326631 systemd[1]: session-63.scope: Deactivated successfully. Aug 13 00:26:04.329978 systemd-logind[1570]: Removed session 63. Aug 13 00:26:09.479962 systemd[1]: Started sshd@66-138.201.175.117:22-139.178.89.65:33456.service - OpenSSH per-connection server daemon (139.178.89.65:33456). Aug 13 00:26:10.572168 sshd[8171]: Accepted publickey for core from 139.178.89.65 port 33456 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:10.580817 sshd[8171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:10.598763 systemd-logind[1570]: New session 64 of user core. Aug 13 00:26:10.607532 systemd[1]: Started session-64.scope - Session 64 of User core. 
Aug 13 00:26:11.501511 sshd[8171]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:11.511178 systemd[1]: sshd@66-138.201.175.117:22-139.178.89.65:33456.service: Deactivated successfully. Aug 13 00:26:11.520481 systemd-logind[1570]: Session 64 logged out. Waiting for processes to exit. Aug 13 00:26:11.521982 systemd[1]: session-64.scope: Deactivated successfully. Aug 13 00:26:11.525164 systemd-logind[1570]: Removed session 64. Aug 13 00:26:16.699372 systemd[1]: Started sshd@67-138.201.175.117:22-139.178.89.65:33458.service - OpenSSH per-connection server daemon (139.178.89.65:33458). Aug 13 00:26:17.820120 sshd[8227]: Accepted publickey for core from 139.178.89.65 port 33458 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:17.826783 sshd[8227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:17.855195 systemd-logind[1570]: New session 65 of user core. Aug 13 00:26:17.861155 systemd[1]: Started session-65.scope - Session 65 of User core. Aug 13 00:26:18.831750 sshd[8227]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:18.843339 systemd[1]: sshd@67-138.201.175.117:22-139.178.89.65:33458.service: Deactivated successfully. Aug 13 00:26:18.852920 systemd[1]: session-65.scope: Deactivated successfully. Aug 13 00:26:18.854943 systemd-logind[1570]: Session 65 logged out. Waiting for processes to exit. Aug 13 00:26:18.857843 systemd-logind[1570]: Removed session 65. Aug 13 00:26:24.004330 systemd[1]: Started sshd@68-138.201.175.117:22-139.178.89.65:55768.service - OpenSSH per-connection server daemon (139.178.89.65:55768). Aug 13 00:26:25.080679 sshd[8269]: Accepted publickey for core from 139.178.89.65 port 55768 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:25.084867 sshd[8269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:25.099726 systemd-logind[1570]: New session 66 of user core. Aug 13 00:26:25.107350 systemd[1]: Started session-66.scope - Session 66 of User core. Aug 13 00:26:26.034522 sshd[8269]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:26.055751 systemd-logind[1570]: Session 66 logged out. Waiting for processes to exit. Aug 13 00:26:26.058007 systemd[1]: sshd@68-138.201.175.117:22-139.178.89.65:55768.service: Deactivated successfully. Aug 13 00:26:26.072853 systemd[1]: session-66.scope: Deactivated successfully. Aug 13 00:26:26.080517 systemd-logind[1570]: Removed session 66. Aug 13 00:26:31.218439 systemd[1]: Started sshd@69-138.201.175.117:22-139.178.89.65:46948.service - OpenSSH per-connection server daemon (139.178.89.65:46948). Aug 13 00:26:32.278373 sshd[8307]: Accepted publickey for core from 139.178.89.65 port 46948 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:32.284360 sshd[8307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:32.298338 systemd-logind[1570]: New session 67 of user core. Aug 13 00:26:32.304849 systemd[1]: Started session-67.scope - Session 67 of User core. Aug 13 00:26:33.227549 sshd[8307]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:33.236073 systemd-logind[1570]: Session 67 logged out. Waiting for processes to exit. Aug 13 00:26:33.237783 systemd[1]: sshd@69-138.201.175.117:22-139.178.89.65:46948.service: Deactivated successfully. Aug 13 00:26:33.248638 systemd[1]: session-67.scope: Deactivated successfully. Aug 13 00:26:33.256431 systemd-logind[1570]: Removed session 67. 
Aug 13 00:26:38.428434 systemd[1]: Started sshd@70-138.201.175.117:22-139.178.89.65:46952.service - OpenSSH per-connection server daemon (139.178.89.65:46952). Aug 13 00:26:39.522557 sshd[8321]: Accepted publickey for core from 139.178.89.65 port 46952 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:39.527532 sshd[8321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:39.542795 systemd-logind[1570]: New session 68 of user core. Aug 13 00:26:39.548923 systemd[1]: Started session-68.scope - Session 68 of User core. Aug 13 00:26:40.831462 sshd[8321]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:40.853496 systemd[1]: sshd@70-138.201.175.117:22-139.178.89.65:46952.service: Deactivated successfully. Aug 13 00:26:40.856304 systemd-logind[1570]: Session 68 logged out. Waiting for processes to exit. Aug 13 00:26:40.874935 systemd[1]: session-68.scope: Deactivated successfully. Aug 13 00:26:40.889853 systemd-logind[1570]: Removed session 68. Aug 13 00:26:45.996548 systemd[1]: Started sshd@71-138.201.175.117:22-139.178.89.65:48584.service - OpenSSH per-connection server daemon (139.178.89.65:48584). Aug 13 00:26:47.049990 sshd[8377]: Accepted publickey for core from 139.178.89.65 port 48584 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:47.056932 sshd[8377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:47.072648 systemd-logind[1570]: New session 69 of user core. Aug 13 00:26:47.080908 systemd[1]: Started session-69.scope - Session 69 of User core. Aug 13 00:26:47.973675 sshd[8377]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:47.993176 systemd[1]: sshd@71-138.201.175.117:22-139.178.89.65:48584.service: Deactivated successfully. Aug 13 00:26:47.993988 systemd-logind[1570]: Session 69 logged out. Waiting for processes to exit. Aug 13 00:26:48.019528 systemd[1]: session-69.scope: Deactivated successfully. Aug 13 00:26:48.028695 systemd-logind[1570]: Removed session 69. Aug 13 00:26:53.162832 systemd[1]: Started sshd@72-138.201.175.117:22-139.178.89.65:53712.service - OpenSSH per-connection server daemon (139.178.89.65:53712). Aug 13 00:26:54.268144 sshd[8393]: Accepted publickey for core from 139.178.89.65 port 53712 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:26:54.272875 sshd[8393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:26:54.287504 systemd-logind[1570]: New session 70 of user core. Aug 13 00:26:54.293359 systemd[1]: Started session-70.scope - Session 70 of User core. Aug 13 00:26:55.290366 sshd[8393]: pam_unix(sshd:session): session closed for user core Aug 13 00:26:55.308495 systemd[1]: sshd@72-138.201.175.117:22-139.178.89.65:53712.service: Deactivated successfully. Aug 13 00:26:55.319386 systemd-logind[1570]: Session 70 logged out. Waiting for processes to exit. Aug 13 00:26:55.320496 systemd[1]: session-70.scope: Deactivated successfully. Aug 13 00:26:55.329144 systemd-logind[1570]: Removed session 70. Aug 13 00:27:00.456045 systemd[1]: Started sshd@73-138.201.175.117:22-139.178.89.65:56040.service - OpenSSH per-connection server daemon (139.178.89.65:56040). 
Aug 13 00:27:01.490229 sshd[8429]: Accepted publickey for core from 139.178.89.65 port 56040 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:01.497461 sshd[8429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:01.522417 systemd-logind[1570]: New session 71 of user core. Aug 13 00:27:01.526680 systemd[1]: Started session-71.scope - Session 71 of User core. Aug 13 00:27:02.378823 sshd[8429]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:02.390925 systemd[1]: sshd@73-138.201.175.117:22-139.178.89.65:56040.service: Deactivated successfully. Aug 13 00:27:02.400945 systemd[1]: session-71.scope: Deactivated successfully. Aug 13 00:27:02.404108 systemd-logind[1570]: Session 71 logged out. Waiting for processes to exit. Aug 13 00:27:02.407243 systemd-logind[1570]: Removed session 71. Aug 13 00:27:07.555680 systemd[1]: Started sshd@74-138.201.175.117:22-139.178.89.65:56042.service - OpenSSH per-connection server daemon (139.178.89.65:56042). Aug 13 00:27:08.590252 sshd[8463]: Accepted publickey for core from 139.178.89.65 port 56042 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:08.596030 sshd[8463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:08.617311 systemd-logind[1570]: New session 72 of user core. Aug 13 00:27:08.621664 systemd[1]: Started session-72.scope - Session 72 of User core. Aug 13 00:27:09.553686 sshd[8463]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:09.567738 systemd-logind[1570]: Session 72 logged out. Waiting for processes to exit. Aug 13 00:27:09.570185 systemd[1]: sshd@74-138.201.175.117:22-139.178.89.65:56042.service: Deactivated successfully. Aug 13 00:27:09.584115 systemd[1]: session-72.scope: Deactivated successfully. Aug 13 00:27:09.592988 systemd-logind[1570]: Removed session 72. Aug 13 00:27:14.732335 systemd[1]: Started sshd@75-138.201.175.117:22-139.178.89.65:60118.service - OpenSSH per-connection server daemon (139.178.89.65:60118). Aug 13 00:27:15.769235 sshd[8514]: Accepted publickey for core from 139.178.89.65 port 60118 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:15.772534 sshd[8514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:15.785887 systemd-logind[1570]: New session 73 of user core. Aug 13 00:27:15.798608 systemd[1]: Started session-73.scope - Session 73 of User core. Aug 13 00:27:16.667481 sshd[8514]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:16.678488 systemd[1]: sshd@75-138.201.175.117:22-139.178.89.65:60118.service: Deactivated successfully. Aug 13 00:27:16.686867 systemd[1]: session-73.scope: Deactivated successfully. Aug 13 00:27:16.689521 systemd-logind[1570]: Session 73 logged out. Waiting for processes to exit. Aug 13 00:27:16.692594 systemd-logind[1570]: Removed session 73. Aug 13 00:27:21.864453 systemd[1]: Started sshd@76-138.201.175.117:22-139.178.89.65:59698.service - OpenSSH per-connection server daemon (139.178.89.65:59698). Aug 13 00:27:22.984817 sshd[8546]: Accepted publickey for core from 139.178.89.65 port 59698 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:22.989599 sshd[8546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:23.000276 systemd-logind[1570]: New session 74 of user core. Aug 13 00:27:23.009845 systemd[1]: Started session-74.scope - Session 74 of User core. 
Aug 13 00:27:24.085329 sshd[8546]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:24.095830 systemd[1]: sshd@76-138.201.175.117:22-139.178.89.65:59698.service: Deactivated successfully. Aug 13 00:27:24.109282 systemd-logind[1570]: Session 74 logged out. Waiting for processes to exit. Aug 13 00:27:24.110656 systemd[1]: session-74.scope: Deactivated successfully. Aug 13 00:27:24.122682 systemd-logind[1570]: Removed session 74. Aug 13 00:27:29.261762 systemd[1]: Started sshd@77-138.201.175.117:22-139.178.89.65:40602.service - OpenSSH per-connection server daemon (139.178.89.65:40602). Aug 13 00:27:30.298996 sshd[8583]: Accepted publickey for core from 139.178.89.65 port 40602 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:30.303334 sshd[8583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:30.316636 systemd-logind[1570]: New session 75 of user core. Aug 13 00:27:30.328403 systemd[1]: Started session-75.scope - Session 75 of User core. Aug 13 00:27:31.394233 sshd[8583]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:31.405092 systemd[1]: sshd@77-138.201.175.117:22-139.178.89.65:40602.service: Deactivated successfully. Aug 13 00:27:31.425795 systemd[1]: session-75.scope: Deactivated successfully. Aug 13 00:27:31.430419 systemd-logind[1570]: Session 75 logged out. Waiting for processes to exit. Aug 13 00:27:31.435045 systemd-logind[1570]: Removed session 75. Aug 13 00:27:36.585026 systemd[1]: Started sshd@78-138.201.175.117:22-139.178.89.65:40608.service - OpenSSH per-connection server daemon (139.178.89.65:40608). Aug 13 00:27:37.676903 sshd[8620]: Accepted publickey for core from 139.178.89.65 port 40608 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:37.680582 sshd[8620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:37.691367 systemd-logind[1570]: New session 76 of user core. Aug 13 00:27:37.701876 systemd[1]: Started session-76.scope - Session 76 of User core. Aug 13 00:27:38.738669 sshd[8620]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:38.750982 systemd[1]: sshd@78-138.201.175.117:22-139.178.89.65:40608.service: Deactivated successfully. Aug 13 00:27:38.766912 systemd[1]: session-76.scope: Deactivated successfully. Aug 13 00:27:38.769679 systemd-logind[1570]: Session 76 logged out. Waiting for processes to exit. Aug 13 00:27:38.773043 systemd-logind[1570]: Removed session 76. Aug 13 00:27:43.902111 systemd[1]: Started sshd@79-138.201.175.117:22-139.178.89.65:40234.service - OpenSSH per-connection server daemon (139.178.89.65:40234). Aug 13 00:27:44.940107 sshd[8677]: Accepted publickey for core from 139.178.89.65 port 40234 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:44.945609 sshd[8677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:44.971353 systemd-logind[1570]: New session 77 of user core. Aug 13 00:27:44.979683 systemd[1]: Started session-77.scope - Session 77 of User core. Aug 13 00:27:45.980950 sshd[8677]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:46.001897 systemd[1]: sshd@79-138.201.175.117:22-139.178.89.65:40234.service: Deactivated successfully. Aug 13 00:27:46.002280 systemd-logind[1570]: Session 77 logged out. Waiting for processes to exit. Aug 13 00:27:46.024589 systemd[1]: session-77.scope: Deactivated successfully. Aug 13 00:27:46.037296 systemd-logind[1570]: Removed session 77. 
Aug 13 00:27:51.144811 systemd[1]: Started sshd@80-138.201.175.117:22-139.178.89.65:54424.service - OpenSSH per-connection server daemon (139.178.89.65:54424). Aug 13 00:27:52.200481 sshd[8693]: Accepted publickey for core from 139.178.89.65 port 54424 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:52.204466 sshd[8693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:52.225639 systemd-logind[1570]: New session 78 of user core. Aug 13 00:27:52.235996 systemd[1]: Started session-78.scope - Session 78 of User core. Aug 13 00:27:53.132716 sshd[8693]: pam_unix(sshd:session): session closed for user core Aug 13 00:27:53.150950 systemd[1]: sshd@80-138.201.175.117:22-139.178.89.65:54424.service: Deactivated successfully. Aug 13 00:27:53.165175 systemd[1]: session-78.scope: Deactivated successfully. Aug 13 00:27:53.169978 systemd-logind[1570]: Session 78 logged out. Waiting for processes to exit. Aug 13 00:27:53.176619 systemd-logind[1570]: Removed session 78. Aug 13 00:27:58.324090 systemd[1]: Started sshd@81-138.201.175.117:22-139.178.89.65:54430.service - OpenSSH per-connection server daemon (139.178.89.65:54430). Aug 13 00:27:59.452075 sshd[8710]: Accepted publickey for core from 139.178.89.65 port 54430 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:27:59.463590 sshd[8710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:27:59.483766 systemd-logind[1570]: New session 79 of user core. Aug 13 00:27:59.493515 systemd[1]: Started session-79.scope - Session 79 of User core. Aug 13 00:28:00.493856 sshd[8710]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:00.505524 systemd[1]: sshd@81-138.201.175.117:22-139.178.89.65:54430.service: Deactivated successfully. Aug 13 00:28:00.506328 systemd-logind[1570]: Session 79 logged out. Waiting for processes to exit. Aug 13 00:28:00.522957 systemd[1]: session-79.scope: Deactivated successfully. Aug 13 00:28:00.534423 systemd-logind[1570]: Removed session 79. Aug 13 00:28:04.310538 systemd[1]: Started sshd@82-138.201.175.117:22-65.20.251.127:58890.service - OpenSSH per-connection server daemon (65.20.251.127:58890). Aug 13 00:28:05.658561 systemd[1]: Started sshd@83-138.201.175.117:22-139.178.89.65:59584.service - OpenSSH per-connection server daemon (139.178.89.65:59584). Aug 13 00:28:06.299640 sshd[8746]: Invalid user admin from 65.20.251.127 port 58890 Aug 13 00:28:06.716063 sshd[8769]: Accepted publickey for core from 139.178.89.65 port 59584 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:06.720062 sshd[8769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:06.734801 systemd-logind[1570]: New session 80 of user core. Aug 13 00:28:06.743042 systemd[1]: Started session-80.scope - Session 80 of User core. 
Aug 13 00:28:06.915033 sshd[8773]: pam_faillock(sshd:auth): User unknown Aug 13 00:28:06.922048 sshd[8746]: Postponed keyboard-interactive for invalid user admin from 65.20.251.127 port 58890 ssh2 [preauth] Aug 13 00:28:07.526822 sshd[8773]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:28:07.526880 sshd[8773]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=65.20.251.127 Aug 13 00:28:07.530998 sshd[8773]: pam_faillock(sshd:auth): User unknown Aug 13 00:28:07.820179 sshd[8769]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:07.833790 systemd[1]: sshd@83-138.201.175.117:22-139.178.89.65:59584.service: Deactivated successfully. Aug 13 00:28:07.835414 systemd-logind[1570]: Session 80 logged out. Waiting for processes to exit. Aug 13 00:28:07.852171 systemd[1]: session-80.scope: Deactivated successfully. Aug 13 00:28:07.861426 systemd-logind[1570]: Removed session 80. Aug 13 00:28:07.991055 systemd[1]: Started sshd@84-138.201.175.117:22-139.178.89.65:59592.service - OpenSSH per-connection server daemon (139.178.89.65:59592). Aug 13 00:28:09.024808 sshd[8785]: Accepted publickey for core from 139.178.89.65 port 59592 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:09.028132 sshd[8785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:09.041058 systemd-logind[1570]: New session 81 of user core. Aug 13 00:28:09.046912 systemd[1]: Started session-81.scope - Session 81 of User core. Aug 13 00:28:10.006455 sshd[8746]: PAM: Permission denied for illegal user admin from 65.20.251.127 Aug 13 00:28:10.008495 sshd[8746]: Failed keyboard-interactive/pam for invalid user admin from 65.20.251.127 port 58890 ssh2 Aug 13 00:28:10.155645 sshd[8785]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:10.172146 systemd[1]: sshd@84-138.201.175.117:22-139.178.89.65:59592.service: Deactivated successfully. Aug 13 00:28:10.183717 systemd-logind[1570]: Session 81 logged out. Waiting for processes to exit. Aug 13 00:28:10.184156 systemd[1]: session-81.scope: Deactivated successfully. Aug 13 00:28:10.189416 systemd-logind[1570]: Removed session 81. Aug 13 00:28:10.314852 sshd[8746]: Connection closed by invalid user admin 65.20.251.127 port 58890 [preauth] Aug 13 00:28:10.321742 systemd[1]: Started sshd@85-138.201.175.117:22-139.178.89.65:51058.service - OpenSSH per-connection server daemon (139.178.89.65:51058). Aug 13 00:28:10.324992 systemd[1]: sshd@82-138.201.175.117:22-65.20.251.127:58890.service: Deactivated successfully. Aug 13 00:28:10.527558 systemd[1]: run-containerd-runc-k8s.io-1b2c9ca85f104fa79d48fbea3aa7ff483968f9aa677492f770970dbd1b634e57-runc.uK9YP3.mount: Deactivated successfully. Aug 13 00:28:11.369788 sshd[8798]: Accepted publickey for core from 139.178.89.65 port 51058 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:11.373547 sshd[8798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:11.392937 systemd-logind[1570]: New session 82 of user core. Aug 13 00:28:11.398866 systemd[1]: Started session-82.scope - Session 82 of User core. Aug 13 00:28:17.577710 sshd[8798]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:17.593991 systemd[1]: sshd@85-138.201.175.117:22-139.178.89.65:51058.service: Deactivated successfully. Aug 13 00:28:17.604424 systemd[1]: session-82.scope: Deactivated successfully. Aug 13 00:28:17.604683 systemd-logind[1570]: Session 82 logged out. 
Waiting for processes to exit. Aug 13 00:28:17.613751 systemd-logind[1570]: Removed session 82. Aug 13 00:28:17.752405 systemd[1]: Started sshd@86-138.201.175.117:22-139.178.89.65:51068.service - OpenSSH per-connection server daemon (139.178.89.65:51068). Aug 13 00:28:18.813336 sshd[8860]: Accepted publickey for core from 139.178.89.65 port 51068 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:18.819718 sshd[8860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:18.843300 systemd-logind[1570]: New session 83 of user core. Aug 13 00:28:18.856539 systemd[1]: Started session-83.scope - Session 83 of User core. Aug 13 00:28:20.189612 sshd[8860]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:20.211333 systemd[1]: sshd@86-138.201.175.117:22-139.178.89.65:51068.service: Deactivated successfully. Aug 13 00:28:20.213533 systemd-logind[1570]: Session 83 logged out. Waiting for processes to exit. Aug 13 00:28:20.228531 systemd[1]: session-83.scope: Deactivated successfully. Aug 13 00:28:20.238400 systemd-logind[1570]: Removed session 83. Aug 13 00:28:20.373152 systemd[1]: Started sshd@87-138.201.175.117:22-139.178.89.65:33830.service - OpenSSH per-connection server daemon (139.178.89.65:33830). Aug 13 00:28:21.416240 sshd[8873]: Accepted publickey for core from 139.178.89.65 port 33830 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:21.418387 sshd[8873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:21.437651 systemd-logind[1570]: New session 84 of user core. Aug 13 00:28:21.446253 systemd[1]: Started session-84.scope - Session 84 of User core. Aug 13 00:28:22.593117 sshd[8873]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:22.605614 systemd[1]: sshd@87-138.201.175.117:22-139.178.89.65:33830.service: Deactivated successfully. Aug 13 00:28:22.606052 systemd-logind[1570]: Session 84 logged out. Waiting for processes to exit. Aug 13 00:28:22.629834 systemd[1]: session-84.scope: Deactivated successfully. Aug 13 00:28:22.633777 systemd-logind[1570]: Removed session 84. Aug 13 00:28:27.771944 systemd[1]: Started sshd@88-138.201.175.117:22-139.178.89.65:33832.service - OpenSSH per-connection server daemon (139.178.89.65:33832). Aug 13 00:28:28.842429 sshd[8907]: Accepted publickey for core from 139.178.89.65 port 33832 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:28.848581 sshd[8907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:28.878357 systemd-logind[1570]: New session 85 of user core. Aug 13 00:28:28.879786 systemd[1]: Started session-85.scope - Session 85 of User core. Aug 13 00:28:29.867875 sshd[8907]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:29.882171 systemd[1]: sshd@88-138.201.175.117:22-139.178.89.65:33832.service: Deactivated successfully. Aug 13 00:28:29.893176 systemd[1]: session-85.scope: Deactivated successfully. Aug 13 00:28:29.896443 systemd-logind[1570]: Session 85 logged out. Waiting for processes to exit. Aug 13 00:28:29.904566 systemd-logind[1570]: Removed session 85. Aug 13 00:28:35.044529 systemd[1]: Started sshd@89-138.201.175.117:22-139.178.89.65:37370.service - OpenSSH per-connection server daemon (139.178.89.65:37370). 
Aug 13 00:28:36.094637 sshd[8944]: Accepted publickey for core from 139.178.89.65 port 37370 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:36.101102 sshd[8944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:36.124733 systemd-logind[1570]: New session 86 of user core. Aug 13 00:28:36.159825 systemd[1]: Started session-86.scope - Session 86 of User core. Aug 13 00:28:37.002798 sshd[8944]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:37.012947 systemd[1]: sshd@89-138.201.175.117:22-139.178.89.65:37370.service: Deactivated successfully. Aug 13 00:28:37.032620 systemd-logind[1570]: Session 86 logged out. Waiting for processes to exit. Aug 13 00:28:37.038152 systemd[1]: session-86.scope: Deactivated successfully. Aug 13 00:28:37.047968 systemd-logind[1570]: Removed session 86. Aug 13 00:28:42.175676 systemd[1]: Started sshd@90-138.201.175.117:22-139.178.89.65:39598.service - OpenSSH per-connection server daemon (139.178.89.65:39598). Aug 13 00:28:43.201816 sshd[9004]: Accepted publickey for core from 139.178.89.65 port 39598 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:43.206125 sshd[9004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:43.218698 systemd-logind[1570]: New session 87 of user core. Aug 13 00:28:43.226926 systemd[1]: Started session-87.scope - Session 87 of User core. Aug 13 00:28:44.073819 sshd[9004]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:44.084107 systemd[1]: sshd@90-138.201.175.117:22-139.178.89.65:39598.service: Deactivated successfully. Aug 13 00:28:44.091144 systemd[1]: session-87.scope: Deactivated successfully. Aug 13 00:28:44.094846 systemd-logind[1570]: Session 87 logged out. Waiting for processes to exit. Aug 13 00:28:44.097582 systemd-logind[1570]: Removed session 87. Aug 13 00:28:44.765316 systemd[1]: Started sshd@91-138.201.175.117:22-178.62.108.116:37944.service - OpenSSH per-connection server daemon (178.62.108.116:37944). Aug 13 00:28:44.898003 sshd[9018]: Connection closed by authenticating user root 178.62.108.116 port 37944 [preauth] Aug 13 00:28:44.900905 systemd[1]: sshd@91-138.201.175.117:22-178.62.108.116:37944.service: Deactivated successfully. Aug 13 00:28:49.248262 systemd[1]: Started sshd@92-138.201.175.117:22-139.178.89.65:51570.service - OpenSSH per-connection server daemon (139.178.89.65:51570). Aug 13 00:28:50.308548 sshd[9024]: Accepted publickey for core from 139.178.89.65 port 51570 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:50.310918 sshd[9024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:50.334153 systemd-logind[1570]: New session 88 of user core. Aug 13 00:28:50.340904 systemd[1]: Started session-88.scope - Session 88 of User core. Aug 13 00:28:51.285598 sshd[9024]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:51.301338 systemd[1]: sshd@92-138.201.175.117:22-139.178.89.65:51570.service: Deactivated successfully. Aug 13 00:28:51.309923 systemd[1]: session-88.scope: Deactivated successfully. Aug 13 00:28:51.312438 systemd-logind[1570]: Session 88 logged out. Waiting for processes to exit. Aug 13 00:28:51.319078 systemd-logind[1570]: Removed session 88. Aug 13 00:28:56.462950 systemd[1]: Started sshd@93-138.201.175.117:22-139.178.89.65:51580.service - OpenSSH per-connection server daemon (139.178.89.65:51580). 
Aug 13 00:28:57.530157 sshd[9049]: Accepted publickey for core from 139.178.89.65 port 51580 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:28:57.536064 sshd[9049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:28:57.548817 systemd-logind[1570]: New session 89 of user core. Aug 13 00:28:57.555090 systemd[1]: Started session-89.scope - Session 89 of User core. Aug 13 00:28:58.505392 sshd[9049]: pam_unix(sshd:session): session closed for user core Aug 13 00:28:58.518934 systemd[1]: sshd@93-138.201.175.117:22-139.178.89.65:51580.service: Deactivated successfully. Aug 13 00:28:58.533079 systemd[1]: session-89.scope: Deactivated successfully. Aug 13 00:28:58.537084 systemd-logind[1570]: Session 89 logged out. Waiting for processes to exit. Aug 13 00:28:58.544038 systemd-logind[1570]: Removed session 89. Aug 13 00:29:03.676693 systemd[1]: Started sshd@94-138.201.175.117:22-139.178.89.65:52348.service - OpenSSH per-connection server daemon (139.178.89.65:52348). Aug 13 00:29:04.709438 sshd[9097]: Accepted publickey for core from 139.178.89.65 port 52348 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:04.715552 sshd[9097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:04.743245 systemd-logind[1570]: New session 90 of user core. Aug 13 00:29:04.751935 systemd[1]: Started session-90.scope - Session 90 of User core. Aug 13 00:29:05.161324 systemd[1]: run-containerd-runc-k8s.io-3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7-runc.tddEud.mount: Deactivated successfully. Aug 13 00:29:05.762716 sshd[9097]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:05.773665 systemd[1]: sshd@94-138.201.175.117:22-139.178.89.65:52348.service: Deactivated successfully. Aug 13 00:29:05.785888 systemd-logind[1570]: Session 90 logged out. Waiting for processes to exit. Aug 13 00:29:05.789820 systemd[1]: session-90.scope: Deactivated successfully. Aug 13 00:29:05.797045 systemd-logind[1570]: Removed session 90. Aug 13 00:29:10.942470 systemd[1]: Started sshd@95-138.201.175.117:22-139.178.89.65:50266.service - OpenSSH per-connection server daemon (139.178.89.65:50266). Aug 13 00:29:11.987795 sshd[9169]: Accepted publickey for core from 139.178.89.65 port 50266 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:11.991534 sshd[9169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:12.007126 systemd-logind[1570]: New session 91 of user core. Aug 13 00:29:12.010951 systemd[1]: Started session-91.scope - Session 91 of User core. Aug 13 00:29:12.886622 sshd[9169]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:12.899740 systemd[1]: sshd@95-138.201.175.117:22-139.178.89.65:50266.service: Deactivated successfully. Aug 13 00:29:12.916530 systemd[1]: session-91.scope: Deactivated successfully. Aug 13 00:29:12.920284 systemd-logind[1570]: Session 91 logged out. Waiting for processes to exit. Aug 13 00:29:12.933343 systemd-logind[1570]: Removed session 91. Aug 13 00:29:18.057880 systemd[1]: Started sshd@96-138.201.175.117:22-139.178.89.65:50274.service - OpenSSH per-connection server daemon (139.178.89.65:50274). 
Aug 13 00:29:19.092774 sshd[9183]: Accepted publickey for core from 139.178.89.65 port 50274 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:19.096386 sshd[9183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:19.107460 systemd-logind[1570]: New session 92 of user core. Aug 13 00:29:19.121819 systemd[1]: Started session-92.scope - Session 92 of User core. Aug 13 00:29:19.965983 sshd[9183]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:19.975593 systemd[1]: sshd@96-138.201.175.117:22-139.178.89.65:50274.service: Deactivated successfully. Aug 13 00:29:19.983835 systemd[1]: session-92.scope: Deactivated successfully. Aug 13 00:29:19.987591 systemd-logind[1570]: Session 92 logged out. Waiting for processes to exit. Aug 13 00:29:19.989976 systemd-logind[1570]: Removed session 92. Aug 13 00:29:25.139978 systemd[1]: Started sshd@97-138.201.175.117:22-139.178.89.65:60424.service - OpenSSH per-connection server daemon (139.178.89.65:60424). Aug 13 00:29:26.179286 sshd[9217]: Accepted publickey for core from 139.178.89.65 port 60424 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:26.183667 sshd[9217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:26.198482 systemd-logind[1570]: New session 93 of user core. Aug 13 00:29:26.210549 systemd[1]: Started session-93.scope - Session 93 of User core. Aug 13 00:29:27.052113 sshd[9217]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:27.061193 systemd[1]: sshd@97-138.201.175.117:22-139.178.89.65:60424.service: Deactivated successfully. Aug 13 00:29:27.070421 systemd-logind[1570]: Session 93 logged out. Waiting for processes to exit. Aug 13 00:29:27.071684 systemd[1]: session-93.scope: Deactivated successfully. Aug 13 00:29:27.076691 systemd-logind[1570]: Removed session 93. Aug 13 00:29:32.229968 systemd[1]: Started sshd@98-138.201.175.117:22-139.178.89.65:51946.service - OpenSSH per-connection server daemon (139.178.89.65:51946). Aug 13 00:29:33.279886 sshd[9255]: Accepted publickey for core from 139.178.89.65 port 51946 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:33.284558 sshd[9255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:33.296390 systemd-logind[1570]: New session 94 of user core. Aug 13 00:29:33.304302 systemd[1]: Started session-94.scope - Session 94 of User core. Aug 13 00:29:34.247488 sshd[9255]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:34.258831 systemd-logind[1570]: Session 94 logged out. Waiting for processes to exit. Aug 13 00:29:34.259075 systemd[1]: sshd@98-138.201.175.117:22-139.178.89.65:51946.service: Deactivated successfully. Aug 13 00:29:34.272489 systemd[1]: session-94.scope: Deactivated successfully. Aug 13 00:29:34.278922 systemd-logind[1570]: Removed session 94. Aug 13 00:29:39.437314 systemd[1]: Started sshd@99-138.201.175.117:22-139.178.89.65:33428.service - OpenSSH per-connection server daemon (139.178.89.65:33428). Aug 13 00:29:40.536922 sshd[9269]: Accepted publickey for core from 139.178.89.65 port 33428 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:40.543663 sshd[9269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:40.565542 systemd-logind[1570]: New session 95 of user core. Aug 13 00:29:40.572187 systemd[1]: Started session-95.scope - Session 95 of User core. 
Aug 13 00:29:41.542454 sshd[9269]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:41.558837 systemd[1]: sshd@99-138.201.175.117:22-139.178.89.65:33428.service: Deactivated successfully. Aug 13 00:29:41.573885 systemd-logind[1570]: Session 95 logged out. Waiting for processes to exit. Aug 13 00:29:41.575478 systemd[1]: session-95.scope: Deactivated successfully. Aug 13 00:29:41.583137 systemd-logind[1570]: Removed session 95. Aug 13 00:29:46.838449 systemd[1]: Started sshd@100-138.201.175.117:22-139.178.89.65:33436.service - OpenSSH per-connection server daemon (139.178.89.65:33436). Aug 13 00:29:47.886959 sshd[9325]: Accepted publickey for core from 139.178.89.65 port 33436 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:47.890421 sshd[9325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:47.902049 systemd-logind[1570]: New session 96 of user core. Aug 13 00:29:47.910006 systemd[1]: Started session-96.scope - Session 96 of User core. Aug 13 00:29:48.750669 sshd[9325]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:48.761092 systemd[1]: sshd@100-138.201.175.117:22-139.178.89.65:33436.service: Deactivated successfully. Aug 13 00:29:48.770722 systemd[1]: session-96.scope: Deactivated successfully. Aug 13 00:29:48.774275 systemd-logind[1570]: Session 96 logged out. Waiting for processes to exit. Aug 13 00:29:48.776897 systemd-logind[1570]: Removed session 96. Aug 13 00:29:53.954507 systemd[1]: Started sshd@101-138.201.175.117:22-139.178.89.65:41824.service - OpenSSH per-connection server daemon (139.178.89.65:41824). Aug 13 00:29:55.129247 sshd[9340]: Accepted publickey for core from 139.178.89.65 port 41824 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:29:55.133311 sshd[9340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:29:55.156356 systemd-logind[1570]: New session 97 of user core. Aug 13 00:29:55.163925 systemd[1]: Started session-97.scope - Session 97 of User core. Aug 13 00:29:56.080781 sshd[9340]: pam_unix(sshd:session): session closed for user core Aug 13 00:29:56.090854 systemd-logind[1570]: Session 97 logged out. Waiting for processes to exit. Aug 13 00:29:56.091831 systemd[1]: sshd@101-138.201.175.117:22-139.178.89.65:41824.service: Deactivated successfully. Aug 13 00:29:56.105926 systemd[1]: session-97.scope: Deactivated successfully. Aug 13 00:29:56.114745 systemd-logind[1570]: Removed session 97. Aug 13 00:30:01.262969 systemd[1]: Started sshd@102-138.201.175.117:22-139.178.89.65:47424.service - OpenSSH per-connection server daemon (139.178.89.65:47424). Aug 13 00:30:02.377940 sshd[9378]: Accepted publickey for core from 139.178.89.65 port 47424 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:30:02.382610 sshd[9378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:30:02.398621 systemd-logind[1570]: New session 98 of user core. Aug 13 00:30:02.404920 systemd[1]: Started session-98.scope - Session 98 of User core. Aug 13 00:30:03.356130 sshd[9378]: pam_unix(sshd:session): session closed for user core Aug 13 00:30:03.373484 systemd[1]: sshd@102-138.201.175.117:22-139.178.89.65:47424.service: Deactivated successfully. Aug 13 00:30:03.382674 systemd[1]: session-98.scope: Deactivated successfully. Aug 13 00:30:03.389290 systemd-logind[1570]: Session 98 logged out. Waiting for processes to exit. 
Aug 13 00:30:03.391716 systemd-logind[1570]: Removed session 98. Aug 13 00:30:08.545842 systemd[1]: Started sshd@103-138.201.175.117:22-139.178.89.65:47432.service - OpenSSH per-connection server daemon (139.178.89.65:47432). Aug 13 00:30:09.646847 sshd[9414]: Accepted publickey for core from 139.178.89.65 port 47432 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:30:09.651472 sshd[9414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:30:09.670441 systemd-logind[1570]: New session 99 of user core. Aug 13 00:30:09.676775 systemd[1]: Started session-99.scope - Session 99 of User core. Aug 13 00:30:10.548892 systemd[1]: run-containerd-runc-k8s.io-3504e4edb445a952a747f178773c8b546f0525ac6180268a2a0f45b1dbf5b4f7-runc.zMqcs0.mount: Deactivated successfully. Aug 13 00:30:10.806124 sshd[9414]: pam_unix(sshd:session): session closed for user core Aug 13 00:30:10.829372 systemd[1]: sshd@103-138.201.175.117:22-139.178.89.65:47432.service: Deactivated successfully. Aug 13 00:30:10.854143 systemd[1]: session-99.scope: Deactivated successfully. Aug 13 00:30:10.856343 systemd-logind[1570]: Session 99 logged out. Waiting for processes to exit. Aug 13 00:30:10.868636 systemd-logind[1570]: Removed session 99. Aug 13 00:30:12.385550 systemd[1]: Started sshd@104-138.201.175.117:22-45.88.8.186:52210.service - OpenSSH per-connection server daemon (45.88.8.186:52210). Aug 13 00:30:13.945340 sshd[9465]: Connection closed by authenticating user root 45.88.8.186 port 52210 [preauth] Aug 13 00:30:13.958639 systemd[1]: sshd@104-138.201.175.117:22-45.88.8.186:52210.service: Deactivated successfully. Aug 13 00:30:15.987795 systemd[1]: Started sshd@105-138.201.175.117:22-139.178.89.65:57950.service - OpenSSH per-connection server daemon (139.178.89.65:57950). Aug 13 00:30:17.097340 sshd[9471]: Accepted publickey for core from 139.178.89.65 port 57950 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0 Aug 13 00:30:17.103078 sshd[9471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:30:17.122017 systemd-logind[1570]: New session 100 of user core. Aug 13 00:30:17.130954 systemd[1]: Started session-100.scope - Session 100 of User core. Aug 13 00:30:18.065248 sshd[9471]: pam_unix(sshd:session): session closed for user core Aug 13 00:30:18.072895 systemd-logind[1570]: Session 100 logged out. Waiting for processes to exit. Aug 13 00:30:18.074790 systemd[1]: sshd@105-138.201.175.117:22-139.178.89.65:57950.service: Deactivated successfully. Aug 13 00:30:18.098008 systemd[1]: session-100.scope: Deactivated successfully. Aug 13 00:30:18.102766 systemd-logind[1570]: Removed session 100. Aug 13 00:30:21.778409 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Aug 13 00:30:21.841113 systemd-tmpfiles[9501]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:30:21.844265 systemd-tmpfiles[9501]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:30:21.846842 systemd-tmpfiles[9501]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:30:21.847912 systemd-tmpfiles[9501]: ACLs are not supported, ignoring. Aug 13 00:30:21.848065 systemd-tmpfiles[9501]: ACLs are not supported, ignoring. Aug 13 00:30:21.855660 systemd-tmpfiles[9501]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 13 00:30:21.855692 systemd-tmpfiles[9501]: Skipping /boot
Aug 13 00:30:21.871857 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Aug 13 00:30:21.872880 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Aug 13 00:30:23.228944 systemd[1]: Started sshd@106-138.201.175.117:22-139.178.89.65:35816.service - OpenSSH per-connection server daemon (139.178.89.65:35816).
Aug 13 00:30:24.262568 sshd[9508]: Accepted publickey for core from 139.178.89.65 port 35816 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:30:24.265960 sshd[9508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:30:24.276737 systemd-logind[1570]: New session 101 of user core.
Aug 13 00:30:24.284940 systemd[1]: Started session-101.scope - Session 101 of User core.
Aug 13 00:30:25.131676 sshd[9508]: pam_unix(sshd:session): session closed for user core
Aug 13 00:30:25.141902 systemd[1]: sshd@106-138.201.175.117:22-139.178.89.65:35816.service: Deactivated successfully.
Aug 13 00:30:25.150101 systemd[1]: session-101.scope: Deactivated successfully.
Aug 13 00:30:25.151839 systemd-logind[1570]: Session 101 logged out. Waiting for processes to exit.
Aug 13 00:30:25.154795 systemd-logind[1570]: Removed session 101.
Aug 13 00:30:30.333333 systemd[1]: Started sshd@107-138.201.175.117:22-139.178.89.65:59506.service - OpenSSH per-connection server daemon (139.178.89.65:59506).
Aug 13 00:30:31.445239 sshd[9552]: Accepted publickey for core from 139.178.89.65 port 59506 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:30:31.449460 sshd[9552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:30:31.462679 systemd-logind[1570]: New session 102 of user core.
Aug 13 00:30:31.467939 systemd[1]: Started session-102.scope - Session 102 of User core.
Aug 13 00:30:32.500624 sshd[9552]: pam_unix(sshd:session): session closed for user core
Aug 13 00:30:32.519509 systemd[1]: sshd@107-138.201.175.117:22-139.178.89.65:59506.service: Deactivated successfully.
Aug 13 00:30:32.534570 systemd[1]: session-102.scope: Deactivated successfully.
Aug 13 00:30:32.539051 systemd-logind[1570]: Session 102 logged out. Waiting for processes to exit.
Aug 13 00:30:32.543316 systemd-logind[1570]: Removed session 102.
Aug 13 00:30:37.662857 systemd[1]: Started sshd@108-138.201.175.117:22-139.178.89.65:59508.service - OpenSSH per-connection server daemon (139.178.89.65:59508).
Aug 13 00:30:38.728834 sshd[9581]: Accepted publickey for core from 139.178.89.65 port 59508 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:30:38.732816 sshd[9581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:30:38.745449 systemd-logind[1570]: New session 103 of user core.
Aug 13 00:30:38.756669 systemd[1]: Started session-103.scope - Session 103 of User core.
Aug 13 00:30:39.681656 sshd[9581]: pam_unix(sshd:session): session closed for user core
Aug 13 00:30:39.692900 systemd[1]: sshd@108-138.201.175.117:22-139.178.89.65:59508.service: Deactivated successfully.
Aug 13 00:30:39.706025 systemd[1]: session-103.scope: Deactivated successfully.
Aug 13 00:30:39.714110 systemd-logind[1570]: Session 103 logged out. Waiting for processes to exit.
Aug 13 00:30:39.718124 systemd-logind[1570]: Removed session 103.
Aug 13 00:30:44.855171 systemd[1]: Started sshd@109-138.201.175.117:22-139.178.89.65:43702.service - OpenSSH per-connection server daemon (139.178.89.65:43702).
Aug 13 00:30:45.885267 sshd[9635]: Accepted publickey for core from 139.178.89.65 port 43702 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:30:45.889503 sshd[9635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:30:45.901303 systemd-logind[1570]: New session 104 of user core.
Aug 13 00:30:45.911843 systemd[1]: Started session-104.scope - Session 104 of User core.
Aug 13 00:30:46.852631 sshd[9635]: pam_unix(sshd:session): session closed for user core
Aug 13 00:30:46.873481 systemd[1]: sshd@109-138.201.175.117:22-139.178.89.65:43702.service: Deactivated successfully.
Aug 13 00:30:46.892135 systemd[1]: session-104.scope: Deactivated successfully.
Aug 13 00:30:46.900233 systemd-logind[1570]: Session 104 logged out. Waiting for processes to exit.
Aug 13 00:30:46.903969 systemd-logind[1570]: Removed session 104.
Aug 13 00:30:52.044372 systemd[1]: Started sshd@110-138.201.175.117:22-139.178.89.65:49716.service - OpenSSH per-connection server daemon (139.178.89.65:49716).
Aug 13 00:30:53.132144 sshd[9651]: Accepted publickey for core from 139.178.89.65 port 49716 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:30:53.138170 sshd[9651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:30:53.160650 systemd-logind[1570]: New session 105 of user core.
Aug 13 00:30:53.168805 systemd[1]: Started session-105.scope - Session 105 of User core.
Aug 13 00:30:54.061027 sshd[9651]: pam_unix(sshd:session): session closed for user core
Aug 13 00:30:54.070049 systemd[1]: sshd@110-138.201.175.117:22-139.178.89.65:49716.service: Deactivated successfully.
Aug 13 00:30:54.078483 systemd[1]: session-105.scope: Deactivated successfully.
Aug 13 00:30:54.080537 systemd-logind[1570]: Session 105 logged out. Waiting for processes to exit.
Aug 13 00:30:54.083085 systemd-logind[1570]: Removed session 105.
Aug 13 00:30:59.231798 systemd[1]: Started sshd@111-138.201.175.117:22-139.178.89.65:34390.service - OpenSSH per-connection server daemon (139.178.89.65:34390).
Aug 13 00:30:59.338260 systemd[1]: run-containerd-runc-k8s.io-77c9a39643da9bf33856483de4796e772446ef9964b475fc8ed334de02b815d5-runc.ziLnTx.mount: Deactivated successfully.
Aug 13 00:31:00.264478 sshd[9667]: Accepted publickey for core from 139.178.89.65 port 34390 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:00.268101 sshd[9667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:00.282280 systemd-logind[1570]: New session 106 of user core.
Aug 13 00:31:00.287895 systemd[1]: Started session-106.scope - Session 106 of User core.
Aug 13 00:31:01.258794 sshd[9667]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:01.272759 systemd-logind[1570]: Session 106 logged out. Waiting for processes to exit.
Aug 13 00:31:01.275714 systemd[1]: sshd@111-138.201.175.117:22-139.178.89.65:34390.service: Deactivated successfully.
Aug 13 00:31:01.291614 systemd[1]: session-106.scope: Deactivated successfully.
Aug 13 00:31:01.301929 systemd-logind[1570]: Removed session 106.
Aug 13 00:31:06.433498 systemd[1]: Started sshd@112-138.201.175.117:22-139.178.89.65:34402.service - OpenSSH per-connection server daemon (139.178.89.65:34402).
Aug 13 00:31:07.470458 sshd[9724]: Accepted publickey for core from 139.178.89.65 port 34402 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:07.472469 sshd[9724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:07.488576 systemd-logind[1570]: New session 107 of user core.
Aug 13 00:31:07.499894 systemd[1]: Started session-107.scope - Session 107 of User core.
Aug 13 00:31:08.423293 sshd[9724]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:08.435104 systemd[1]: sshd@112-138.201.175.117:22-139.178.89.65:34402.service: Deactivated successfully.
Aug 13 00:31:08.449880 systemd[1]: session-107.scope: Deactivated successfully.
Aug 13 00:31:08.454871 systemd-logind[1570]: Session 107 logged out. Waiting for processes to exit.
Aug 13 00:31:08.457534 systemd-logind[1570]: Removed session 107.
Aug 13 00:31:13.625782 systemd[1]: Started sshd@113-138.201.175.117:22-139.178.89.65:43896.service - OpenSSH per-connection server daemon (139.178.89.65:43896).
Aug 13 00:31:14.751147 sshd[9777]: Accepted publickey for core from 139.178.89.65 port 43896 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:14.755720 sshd[9777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:14.768717 systemd-logind[1570]: New session 108 of user core.
Aug 13 00:31:14.778056 systemd[1]: Started session-108.scope - Session 108 of User core.
Aug 13 00:31:15.728101 sshd[9777]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:15.749379 systemd[1]: sshd@113-138.201.175.117:22-139.178.89.65:43896.service: Deactivated successfully.
Aug 13 00:31:15.773586 systemd[1]: session-108.scope: Deactivated successfully.
Aug 13 00:31:15.779416 systemd-logind[1570]: Session 108 logged out. Waiting for processes to exit.
Aug 13 00:31:15.785713 systemd-logind[1570]: Removed session 108.
Aug 13 00:31:20.894838 systemd[1]: Started sshd@114-138.201.175.117:22-139.178.89.65:43182.service - OpenSSH per-connection server daemon (139.178.89.65:43182).
Aug 13 00:31:21.951246 sshd[9791]: Accepted publickey for core from 139.178.89.65 port 43182 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:21.954826 sshd[9791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:21.982462 systemd-logind[1570]: New session 109 of user core.
Aug 13 00:31:22.169026 systemd[1]: Started session-109.scope - Session 109 of User core.
Aug 13 00:31:22.946660 sshd[9791]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:22.954880 systemd[1]: sshd@114-138.201.175.117:22-139.178.89.65:43182.service: Deactivated successfully.
Aug 13 00:31:22.969978 systemd[1]: session-109.scope: Deactivated successfully.
Aug 13 00:31:22.977956 systemd-logind[1570]: Session 109 logged out. Waiting for processes to exit.
Aug 13 00:31:22.980797 systemd-logind[1570]: Removed session 109.
Aug 13 00:31:28.119964 systemd[1]: Started sshd@115-138.201.175.117:22-139.178.89.65:43194.service - OpenSSH per-connection server daemon (139.178.89.65:43194).
Aug 13 00:31:29.139881 sshd[9827]: Accepted publickey for core from 139.178.89.65 port 43194 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:29.143196 sshd[9827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:29.156304 systemd-logind[1570]: New session 110 of user core.
Aug 13 00:31:29.167439 systemd[1]: Started session-110.scope - Session 110 of User core.
Aug 13 00:31:30.003834 sshd[9827]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:30.022145 systemd[1]: sshd@115-138.201.175.117:22-139.178.89.65:43194.service: Deactivated successfully.
Aug 13 00:31:30.031799 systemd[1]: session-110.scope: Deactivated successfully.
Aug 13 00:31:30.035119 systemd-logind[1570]: Session 110 logged out. Waiting for processes to exit.
Aug 13 00:31:30.039342 systemd-logind[1570]: Removed session 110.
Aug 13 00:31:35.174809 systemd[1]: Started sshd@116-138.201.175.117:22-139.178.89.65:40540.service - OpenSSH per-connection server daemon (139.178.89.65:40540).
Aug 13 00:31:36.254439 sshd[9864]: Accepted publickey for core from 139.178.89.65 port 40540 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:36.259499 sshd[9864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:36.274787 systemd-logind[1570]: New session 111 of user core.
Aug 13 00:31:36.280848 systemd[1]: Started session-111.scope - Session 111 of User core.
Aug 13 00:31:37.197609 sshd[9864]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:37.218658 systemd[1]: sshd@116-138.201.175.117:22-139.178.89.65:40540.service: Deactivated successfully.
Aug 13 00:31:37.222239 systemd-logind[1570]: Session 111 logged out. Waiting for processes to exit.
Aug 13 00:31:37.238341 systemd[1]: session-111.scope: Deactivated successfully.
Aug 13 00:31:37.245802 systemd-logind[1570]: Removed session 111.
Aug 13 00:31:42.385918 systemd[1]: Started sshd@117-138.201.175.117:22-139.178.89.65:35246.service - OpenSSH per-connection server daemon (139.178.89.65:35246).
Aug 13 00:31:43.483096 sshd[9925]: Accepted publickey for core from 139.178.89.65 port 35246 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:43.488467 sshd[9925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:43.507332 systemd-logind[1570]: New session 112 of user core.
Aug 13 00:31:43.518708 systemd[1]: Started session-112.scope - Session 112 of User core.
Aug 13 00:31:44.426981 sshd[9925]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:44.444785 systemd[1]: sshd@117-138.201.175.117:22-139.178.89.65:35246.service: Deactivated successfully.
Aug 13 00:31:44.456776 systemd-logind[1570]: Session 112 logged out. Waiting for processes to exit.
Aug 13 00:31:44.457559 systemd[1]: session-112.scope: Deactivated successfully.
Aug 13 00:31:44.464652 systemd-logind[1570]: Removed session 112.
Aug 13 00:31:49.599095 systemd[1]: Started sshd@118-138.201.175.117:22-139.178.89.65:46176.service - OpenSSH per-connection server daemon (139.178.89.65:46176).
Aug 13 00:31:50.896520 sshd[9940]: Accepted publickey for core from 139.178.89.65 port 46176 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:50.900448 sshd[9940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:50.912935 systemd-logind[1570]: New session 113 of user core.
Aug 13 00:31:50.923935 systemd[1]: Started session-113.scope - Session 113 of User core.
Aug 13 00:31:51.841637 sshd[9940]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:51.862323 systemd[1]: sshd@118-138.201.175.117:22-139.178.89.65:46176.service: Deactivated successfully.
Aug 13 00:31:51.872847 systemd[1]: session-113.scope: Deactivated successfully.
Aug 13 00:31:51.878148 systemd-logind[1570]: Session 113 logged out. Waiting for processes to exit.
Aug 13 00:31:51.881780 systemd-logind[1570]: Removed session 113.
Aug 13 00:31:57.034341 systemd[1]: Started sshd@119-138.201.175.117:22-139.178.89.65:46180.service - OpenSSH per-connection server daemon (139.178.89.65:46180).
Aug 13 00:31:58.140804 sshd[9959]: Accepted publickey for core from 139.178.89.65 port 46180 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:31:58.144458 sshd[9959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:31:58.158386 systemd-logind[1570]: New session 114 of user core.
Aug 13 00:31:58.166972 systemd[1]: Started session-114.scope - Session 114 of User core.
Aug 13 00:31:59.101614 sshd[9959]: pam_unix(sshd:session): session closed for user core
Aug 13 00:31:59.119645 systemd-logind[1570]: Session 114 logged out. Waiting for processes to exit.
Aug 13 00:31:59.123162 systemd[1]: sshd@119-138.201.175.117:22-139.178.89.65:46180.service: Deactivated successfully.
Aug 13 00:31:59.146964 systemd[1]: session-114.scope: Deactivated successfully.
Aug 13 00:31:59.150770 systemd-logind[1570]: Removed session 114.
Aug 13 00:32:04.262826 systemd[1]: Started sshd@120-138.201.175.117:22-139.178.89.65:50748.service - OpenSSH per-connection server daemon (139.178.89.65:50748).
Aug 13 00:32:05.300559 sshd[10002]: Accepted publickey for core from 139.178.89.65 port 50748 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:32:05.303965 sshd[10002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:32:05.317919 systemd-logind[1570]: New session 115 of user core.
Aug 13 00:32:05.333943 systemd[1]: Started session-115.scope - Session 115 of User core.
Aug 13 00:32:06.211718 sshd[10002]: pam_unix(sshd:session): session closed for user core
Aug 13 00:32:06.221655 systemd[1]: sshd@120-138.201.175.117:22-139.178.89.65:50748.service: Deactivated successfully.
Aug 13 00:32:06.232005 systemd[1]: session-115.scope: Deactivated successfully.
Aug 13 00:32:06.234761 systemd-logind[1570]: Session 115 logged out. Waiting for processes to exit.
Aug 13 00:32:06.238430 systemd-logind[1570]: Removed session 115.
Aug 13 00:32:11.398826 systemd[1]: Started sshd@121-138.201.175.117:22-139.178.89.65:37024.service - OpenSSH per-connection server daemon (139.178.89.65:37024).
Aug 13 00:32:12.482379 sshd[10090]: Accepted publickey for core from 139.178.89.65 port 37024 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:32:12.484474 sshd[10090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:32:12.506537 systemd-logind[1570]: New session 116 of user core.
Aug 13 00:32:12.514257 systemd[1]: Started session-116.scope - Session 116 of User core.
Aug 13 00:32:13.493402 sshd[10090]: pam_unix(sshd:session): session closed for user core
Aug 13 00:32:13.506733 systemd-logind[1570]: Session 116 logged out. Waiting for processes to exit.
Aug 13 00:32:13.510179 systemd[1]: sshd@121-138.201.175.117:22-139.178.89.65:37024.service: Deactivated successfully.
Aug 13 00:32:13.524695 systemd[1]: session-116.scope: Deactivated successfully.
Aug 13 00:32:13.530178 systemd-logind[1570]: Removed session 116.
Aug 13 00:32:18.673724 systemd[1]: Started sshd@122-138.201.175.117:22-139.178.89.65:37038.service - OpenSSH per-connection server daemon (139.178.89.65:37038).
Aug 13 00:32:19.749658 sshd[10104]: Accepted publickey for core from 139.178.89.65 port 37038 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:32:19.753600 sshd[10104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:32:19.780304 systemd-logind[1570]: New session 117 of user core.
Aug 13 00:32:19.782868 systemd[1]: Started session-117.scope - Session 117 of User core.
Aug 13 00:32:20.741463 sshd[10104]: pam_unix(sshd:session): session closed for user core
Aug 13 00:32:20.757183 systemd-logind[1570]: Session 117 logged out. Waiting for processes to exit.
Aug 13 00:32:20.757439 systemd[1]: sshd@122-138.201.175.117:22-139.178.89.65:37038.service: Deactivated successfully.
Aug 13 00:32:20.771319 systemd[1]: session-117.scope: Deactivated successfully.
Aug 13 00:32:20.780216 systemd-logind[1570]: Removed session 117.
Aug 13 00:32:24.899026 systemd[1]: Started sshd@123-138.201.175.117:22-167.99.149.55:29370.service - OpenSSH per-connection server daemon (167.99.149.55:29370).
Aug 13 00:32:25.108990 sshd[10137]: Connection reset by 167.99.149.55 port 29370 [preauth]
Aug 13 00:32:25.116717 systemd[1]: sshd@123-138.201.175.117:22-167.99.149.55:29370.service: Deactivated successfully.
Aug 13 00:32:25.918801 systemd[1]: Started sshd@124-138.201.175.117:22-139.178.89.65:38138.service - OpenSSH per-connection server daemon (139.178.89.65:38138).
Aug 13 00:32:26.988561 sshd[10142]: Accepted publickey for core from 139.178.89.65 port 38138 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:32:26.991350 sshd[10142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:32:27.012392 systemd-logind[1570]: New session 118 of user core.
Aug 13 00:32:27.022404 systemd[1]: Started session-118.scope - Session 118 of User core.
Aug 13 00:32:27.947632 sshd[10142]: pam_unix(sshd:session): session closed for user core
Aug 13 00:32:27.970733 systemd[1]: sshd@124-138.201.175.117:22-139.178.89.65:38138.service: Deactivated successfully.
Aug 13 00:32:27.971749 systemd-logind[1570]: Session 118 logged out. Waiting for processes to exit.
Aug 13 00:32:27.987579 systemd[1]: session-118.scope: Deactivated successfully.
Aug 13 00:32:27.997704 systemd-logind[1570]: Removed session 118.
Aug 13 00:32:33.122870 systemd[1]: Started sshd@125-138.201.175.117:22-139.178.89.65:51584.service - OpenSSH per-connection server daemon (139.178.89.65:51584).
Aug 13 00:32:34.167327 sshd[10180]: Accepted publickey for core from 139.178.89.65 port 51584 ssh2: RSA SHA256:TbpwDUqnmmr/6oeFI65A/iU5DlmHGueKflwEEvdqHG0
Aug 13 00:32:34.170889 sshd[10180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:32:34.191468 systemd-logind[1570]: New session 119 of user core.
Aug 13 00:32:34.197957 systemd[1]: Started session-119.scope - Session 119 of User core.
Aug 13 00:32:35.084184 sshd[10180]: pam_unix(sshd:session): session closed for user core
Aug 13 00:32:35.100568 systemd[1]: sshd@125-138.201.175.117:22-139.178.89.65:51584.service: Deactivated successfully.
Aug 13 00:32:35.123403 systemd[1]: session-119.scope: Deactivated successfully.
Aug 13 00:32:35.131166 systemd-logind[1570]: Session 119 logged out. Waiting for processes to exit.
Aug 13 00:32:35.137292 systemd-logind[1570]: Removed session 119.
Aug 13 00:33:07.029541 containerd[1592]: time="2025-08-13T00:33:07.029421969Z" level=info msg="shim disconnected" id=d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4 namespace=k8s.io
Aug 13 00:33:07.029541 containerd[1592]: time="2025-08-13T00:33:07.029531207Z" level=warning msg="cleaning up after shim disconnected" id=d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4 namespace=k8s.io
Aug 13 00:33:07.035420 containerd[1592]: time="2025-08-13T00:33:07.029561406Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:33:07.039896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4-rootfs.mount: Deactivated successfully.
Aug 13 00:33:07.179845 containerd[1592]: time="2025-08-13T00:33:07.179340589Z" level=info msg="shim disconnected" id=479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d namespace=k8s.io
Aug 13 00:33:07.179845 containerd[1592]: time="2025-08-13T00:33:07.179518586Z" level=warning msg="cleaning up after shim disconnected" id=479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d namespace=k8s.io
Aug 13 00:33:07.179845 containerd[1592]: time="2025-08-13T00:33:07.179556425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:33:07.192719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d-rootfs.mount: Deactivated successfully.
Aug 13 00:33:07.204959 kubelet[2741]: E0813 00:33:07.201495 2741 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40972->10.0.0.2:2379: read: connection timed out"
Aug 13 00:33:07.276841 containerd[1592]: time="2025-08-13T00:33:07.276699678Z" level=info msg="shim disconnected" id=623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703 namespace=k8s.io
Aug 13 00:33:07.276841 containerd[1592]: time="2025-08-13T00:33:07.276834396Z" level=warning msg="cleaning up after shim disconnected" id=623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703 namespace=k8s.io
Aug 13 00:33:07.276841 containerd[1592]: time="2025-08-13T00:33:07.276860595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:33:07.281506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703-rootfs.mount: Deactivated successfully.
Aug 13 00:33:07.962533 kubelet[2741]: I0813 00:33:07.961881 2741 scope.go:117] "RemoveContainer" containerID="479936b9cff5b71946c78930edcf573377977b51ca9b45e8382ab2f71813f28d"
Aug 13 00:33:07.981566 kubelet[2741]: I0813 00:33:07.980190 2741 scope.go:117] "RemoveContainer" containerID="d98a0e068533b4d82d2e1b45c9cc7819abab60d6dc5f15ac8762508901a813e4"
Aug 13 00:33:07.981863 containerd[1592]: time="2025-08-13T00:33:07.981461744Z" level=info msg="CreateContainer within sandbox \"368b49ced91fe757d296057a2fa3023fd7a49a5f2371854687bad37cdfcf4bc5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 13 00:33:07.988307 kubelet[2741]: I0813 00:33:07.988251 2741 scope.go:117] "RemoveContainer" containerID="623b74f3d669072dec7eae00464c0a21fd5afc9439176f5ba989f65ec8211703"
Aug 13 00:33:07.993875 containerd[1592]: time="2025-08-13T00:33:07.993363640Z" level=info msg="CreateContainer within sandbox \"c5406bf23184dc632aa1d3b9e13ce2a453547922315da953a653331b161c0b69\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 00:33:07.994853 containerd[1592]: time="2025-08-13T00:33:07.994775374Z" level=info msg="CreateContainer within sandbox \"2661af3b8d712686f3fe9378aa8b8f69a0fd4e1793541f5bead6a4ade999d7ad\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 00:33:08.044287 containerd[1592]: time="2025-08-13T00:33:08.044181211Z" level=info msg="CreateContainer within sandbox \"368b49ced91fe757d296057a2fa3023fd7a49a5f2371854687bad37cdfcf4bc5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"15be26e868afed5959b886784dd3ff1f74bcff0f0ec6c009fc8c6c0d553d94e3\""
Aug 13 00:33:08.057271 containerd[1592]: time="2025-08-13T00:33:08.056043390Z" level=info msg="StartContainer for \"15be26e868afed5959b886784dd3ff1f74bcff0f0ec6c009fc8c6c0d553d94e3\""
Aug 13 00:33:08.091026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151806179.mount: Deactivated successfully.
Aug 13 00:33:08.124757 containerd[1592]: time="2025-08-13T00:33:08.124630751Z" level=info msg="CreateContainer within sandbox \"2661af3b8d712686f3fe9378aa8b8f69a0fd4e1793541f5bead6a4ade999d7ad\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4c03920e291928b8a9bf71312e503be702abc66ebf5bea5660222a9bd8b9b939\""
Aug 13 00:33:08.126398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861221525.mount: Deactivated successfully.
Aug 13 00:33:08.128319 containerd[1592]: time="2025-08-13T00:33:08.127572416Z" level=info msg="CreateContainer within sandbox \"c5406bf23184dc632aa1d3b9e13ce2a453547922315da953a653331b161c0b69\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8efafdc2d96c1164b2d86f94de73d293b155b35cc3393d0acbd4242de343ff19\""
Aug 13 00:33:08.131095 containerd[1592]: time="2025-08-13T00:33:08.131025192Z" level=info msg="StartContainer for \"8efafdc2d96c1164b2d86f94de73d293b155b35cc3393d0acbd4242de343ff19\""
Aug 13 00:33:08.131622 containerd[1592]: time="2025-08-13T00:33:08.131460503Z" level=info msg="StartContainer for \"4c03920e291928b8a9bf71312e503be702abc66ebf5bea5660222a9bd8b9b939\""
Aug 13 00:33:08.389312 containerd[1592]: time="2025-08-13T00:33:08.387181654Z" level=info msg="StartContainer for \"15be26e868afed5959b886784dd3ff1f74bcff0f0ec6c009fc8c6c0d553d94e3\" returns successfully"
Aug 13 00:33:08.432883 containerd[1592]: time="2025-08-13T00:33:08.432477089Z" level=info msg="StartContainer for \"4c03920e291928b8a9bf71312e503be702abc66ebf5bea5660222a9bd8b9b939\" returns successfully"
Aug 13 00:33:08.462254 containerd[1592]: time="2025-08-13T00:33:08.461963659Z" level=info msg="StartContainer for \"8efafdc2d96c1164b2d86f94de73d293b155b35cc3393d0acbd4242de343ff19\" returns successfully"
Aug 13 00:33:09.001864 kubelet[2741]: E0813 00:33:08.986387 2741 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40788->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-0-684996fd0b.185b2c4a8cc7c349 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-0-684996fd0b,UID:54fc0ea4b93ab5b1231ab715c0904ff2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-0-684996fd0b,},FirstTimestamp:2025-08-13 00:32:58.492044105 +0000 UTC m=+969.759768709,LastTimestamp:2025-08-13 00:32:58.492044105 +0000 UTC m=+969.759768709,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-0-684996fd0b,}"
Aug 13 00:33:14.916603 kubelet[2741]: I0813 00:33:14.916257 2741 status_manager.go:851] "Failed to get status for pod" podUID="54fc0ea4b93ab5b1231ab715c0904ff2" pod="kube-system/kube-apiserver-ci-4081-3-5-0-684996fd0b" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout"