Sep 13 00:25:57.038300 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 13 00:25:57.038339 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 13 00:25:57.038353 kernel: KASLR enabled Sep 13 00:25:57.038360 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Sep 13 00:25:57.038367 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Sep 13 00:25:57.038374 kernel: random: crng init done Sep 13 00:25:57.038382 kernel: ACPI: Early table checksum verification disabled Sep 13 00:25:57.038389 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Sep 13 00:25:57.038396 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:25:57.038406 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038414 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038421 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038428 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038435 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038444 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038452 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.038480 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.039326 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:25:57.039337 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:25:57.039343 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Sep 13 00:25:57.039351 kernel: NUMA: Failed to initialise from firmware Sep 13 00:25:57.039359 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Sep 13 00:25:57.039365 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Sep 13 00:25:57.039372 kernel: Zone ranges: Sep 13 00:25:57.039378 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 13 00:25:57.039391 kernel: DMA32 empty Sep 13 00:25:57.039398 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Sep 13 00:25:57.039404 kernel: Movable zone start for each node Sep 13 00:25:57.039411 kernel: Early memory node ranges Sep 13 00:25:57.039417 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Sep 13 00:25:57.039423 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Sep 13 00:25:57.039430 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Sep 13 00:25:57.039436 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Sep 13 00:25:57.039443 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Sep 13 00:25:57.039450 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Sep 13 00:25:57.039480 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Sep 13 00:25:57.039488 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Sep 13 00:25:57.039498 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Sep 13 00:25:57.039505 kernel: psci: probing for conduit method from ACPI. 
Sep 13 00:25:57.039512 kernel: psci: PSCIv1.1 detected in firmware. Sep 13 00:25:57.039522 kernel: psci: Using standard PSCI v0.2 function IDs Sep 13 00:25:57.039529 kernel: psci: Trusted OS migration not required Sep 13 00:25:57.039536 kernel: psci: SMC Calling Convention v1.1 Sep 13 00:25:57.039544 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 13 00:25:57.039551 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 13 00:25:57.039558 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 13 00:25:57.039566 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 13 00:25:57.039572 kernel: Detected PIPT I-cache on CPU0 Sep 13 00:25:57.039579 kernel: CPU features: detected: GIC system register CPU interface Sep 13 00:25:57.039586 kernel: CPU features: detected: Hardware dirty bit management Sep 13 00:25:57.039593 kernel: CPU features: detected: Spectre-v4 Sep 13 00:25:57.039600 kernel: CPU features: detected: Spectre-BHB Sep 13 00:25:57.039607 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 13 00:25:57.039615 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 13 00:25:57.039622 kernel: CPU features: detected: ARM erratum 1418040 Sep 13 00:25:57.039629 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 13 00:25:57.039636 kernel: alternatives: applying boot alternatives Sep 13 00:25:57.039645 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:25:57.039652 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:25:57.039659 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:25:57.039666 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:25:57.039673 kernel: Fallback order for Node 0: 0 Sep 13 00:25:57.039680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Sep 13 00:25:57.039687 kernel: Policy zone: Normal Sep 13 00:25:57.039696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:25:57.039703 kernel: software IO TLB: area num 2. Sep 13 00:25:57.039710 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Sep 13 00:25:57.039718 kernel: Memory: 3882740K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 213260K reserved, 0K cma-reserved) Sep 13 00:25:57.039725 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:25:57.039732 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:25:57.039740 kernel: rcu: RCU event tracing is enabled. Sep 13 00:25:57.039747 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:25:57.039754 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:25:57.039761 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:25:57.039768 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:25:57.039777 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:25:57.039784 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:25:57.039790 kernel: GICv3: 256 SPIs implemented Sep 13 00:25:57.039797 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:25:57.039804 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:25:57.039811 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 13 00:25:57.039818 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 13 00:25:57.039825 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 13 00:25:57.039832 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 13 00:25:57.039839 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Sep 13 00:25:57.039846 kernel: GICv3: using LPI property table @0x00000001000e0000 Sep 13 00:25:57.039853 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Sep 13 00:25:57.039862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:25:57.039869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:25:57.039876 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:25:57.039883 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:25:57.039890 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:25:57.039897 kernel: Console: colour dummy device 80x25 Sep 13 00:25:57.039905 kernel: ACPI: Core revision 20230628 Sep 13 00:25:57.039912 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 13 00:25:57.039919 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:25:57.039927 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:25:57.039936 kernel: landlock: Up and running. Sep 13 00:25:57.039943 kernel: SELinux: Initializing. Sep 13 00:25:57.039950 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:25:57.039958 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:25:57.039965 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:25:57.039973 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 13 00:25:57.039980 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:25:57.039987 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:25:57.039995 kernel: Platform MSI: ITS@0x8080000 domain created Sep 13 00:25:57.040004 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 13 00:25:57.040011 kernel: Remapping and enabling EFI services. Sep 13 00:25:57.040018 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:25:57.040025 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:25:57.040032 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 13 00:25:57.040039 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Sep 13 00:25:57.040046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:25:57.040053 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:25:57.040060 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:25:57.040067 kernel: SMP: Total of 2 processors activated. 
Sep 13 00:25:57.040077 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:25:57.040085 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:25:57.040111 kernel: CPU features: detected: Common not Private translations Sep 13 00:25:57.040126 kernel: CPU features: detected: CRC32 instructions Sep 13 00:25:57.040133 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 13 00:25:57.040141 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:25:57.040149 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:25:57.040157 kernel: CPU features: detected: Privileged Access Never Sep 13 00:25:57.040165 kernel: CPU features: detected: RAS Extension Support Sep 13 00:25:57.040174 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 13 00:25:57.040182 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:25:57.040190 kernel: alternatives: applying system-wide alternatives Sep 13 00:25:57.040197 kernel: devtmpfs: initialized Sep 13 00:25:57.040205 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:25:57.040213 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:25:57.040220 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:25:57.040229 kernel: SMBIOS 3.0.0 present. Sep 13 00:25:57.040237 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Sep 13 00:25:57.040244 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:25:57.040252 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:25:57.040259 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:25:57.040267 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:25:57.040274 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:25:57.040281 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Sep 13 00:25:57.040289 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:25:57.040298 kernel: cpuidle: using governor menu Sep 13 00:25:57.040306 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 13 00:25:57.040314 kernel: ASID allocator initialised with 32768 entries Sep 13 00:25:57.040321 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:25:57.040329 kernel: Serial: AMBA PL011 UART driver Sep 13 00:25:57.040336 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 13 00:25:57.040344 kernel: Modules: 0 pages in range for non-PLT usage Sep 13 00:25:57.040352 kernel: Modules: 508992 pages in range for PLT usage Sep 13 00:25:57.040359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:25:57.040369 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:25:57.040376 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:25:57.040384 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 13 00:25:57.040391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:25:57.040399 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:25:57.040406 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 13 00:25:57.040414 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 13 00:25:57.040421 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:25:57.040429 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:25:57.040438 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:25:57.040445 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:25:57.040551 kernel: ACPI: Interpreter enabled Sep 13 00:25:57.040565 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:25:57.040572 kernel: ACPI: MCFG table detected, 1 entries Sep 13 00:25:57.040581 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:25:57.040588 kernel: printk: console [ttyAMA0] enabled Sep 13 00:25:57.040596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:25:57.040811 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:25:57.040897 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 00:25:57.040968 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 00:25:57.041035 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 13 00:25:57.041144 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 13 00:25:57.041156 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 13 00:25:57.041164 kernel: PCI host bridge to bus 0000:00 Sep 13 00:25:57.041252 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 13 00:25:57.041319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 13 00:25:57.041511 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 13 00:25:57.041584 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:25:57.041702 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 13 00:25:57.041795 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Sep 13 00:25:57.041866 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Sep 13 00:25:57.041959 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Sep 13 00:25:57.042037 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.042175 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Sep 13 
00:25:57.042274 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.042345 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Sep 13 00:25:57.042421 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.042542 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Sep 13 00:25:57.042632 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.042701 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Sep 13 00:25:57.042780 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.042848 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Sep 13 00:25:57.042932 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.043006 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Sep 13 00:25:57.043085 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.043193 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Sep 13 00:25:57.043271 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.043352 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Sep 13 00:25:57.043445 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Sep 13 00:25:57.043543 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Sep 13 00:25:57.043626 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Sep 13 00:25:57.043693 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Sep 13 00:25:57.043773 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:25:57.043844 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Sep 13 00:25:57.043926 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:25:57.044012 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 13 00:25:57.044125 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 13 00:25:57.044215 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Sep 13 00:25:57.044295 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Sep 13 00:25:57.044366 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Sep 13 00:25:57.044438 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Sep 13 00:25:57.044735 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Sep 13 00:25:57.044814 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Sep 13 00:25:57.044930 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 13 00:25:57.045017 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Sep 13 00:25:57.045178 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Sep 13 00:25:57.045262 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Sep 13 00:25:57.045334 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Sep 13 00:25:57.045432 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Sep 13 00:25:57.045558 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Sep 13 00:25:57.045632 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Sep 13 00:25:57.045706 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Sep 13 00:25:57.045784 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Sep 13 00:25:57.045869 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit 
pref] to [bus 01] add_size 100000 add_align 100000 Sep 13 00:25:57.045954 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Sep 13 00:25:57.046056 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Sep 13 00:25:57.046164 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Sep 13 00:25:57.046249 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Sep 13 00:25:57.046338 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 13 00:25:57.046420 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Sep 13 00:25:57.048858 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Sep 13 00:25:57.048973 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 13 00:25:57.049040 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Sep 13 00:25:57.049146 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Sep 13 00:25:57.049232 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 13 00:25:57.049302 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Sep 13 00:25:57.049369 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Sep 13 00:25:57.049443 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 00:25:57.050413 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 13 00:25:57.050534 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Sep 13 00:25:57.050624 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 00:25:57.050690 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Sep 13 00:25:57.050756 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Sep 13 00:25:57.050832 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 00:25:57.050908 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Sep 13 00:25:57.050990 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Sep 13 00:25:57.051062 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 00:25:57.051157 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Sep 13 00:25:57.051254 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Sep 13 00:25:57.051732 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Sep 13 00:25:57.051840 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:25:57.051932 kernel: pci 
0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Sep 13 00:25:57.052009 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:25:57.052086 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Sep 13 00:25:57.052255 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:25:57.052357 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Sep 13 00:25:57.052443 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:25:57.052665 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Sep 13 00:25:57.052758 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:25:57.052836 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Sep 13 00:25:57.052906 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:25:57.052986 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Sep 13 00:25:57.053057 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:25:57.053167 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Sep 13 00:25:57.053251 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:25:57.053327 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Sep 13 00:25:57.053395 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:25:57.053488 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Sep 13 00:25:57.053582 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Sep 13 00:25:57.054668 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Sep 13 00:25:57.054763 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 13 00:25:57.054834 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Sep 13 00:25:57.054899 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 13 00:25:57.054971 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Sep 13 00:25:57.055037 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 13 00:25:57.055180 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Sep 13 00:25:57.055271 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 13 00:25:57.055344 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Sep 13 00:25:57.055411 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 13 00:25:57.055503 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Sep 13 00:25:57.055578 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 13 00:25:57.055658 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Sep 13 00:25:57.055726 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 13 00:25:57.055797 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Sep 13 00:25:57.055872 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 13 00:25:57.055945 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Sep 13 00:25:57.056014 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Sep 13 00:25:57.056092 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Sep 13 00:25:57.056198 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 
0x10000000-0x1007ffff pref] Sep 13 00:25:57.056271 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:25:57.056341 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Sep 13 00:25:57.056411 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Sep 13 00:25:57.057491 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 13 00:25:57.057636 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Sep 13 00:25:57.057714 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:25:57.057805 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Sep 13 00:25:57.057890 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Sep 13 00:25:57.057983 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 13 00:25:57.058058 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Sep 13 00:25:57.058165 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:25:57.058250 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Sep 13 00:25:57.058323 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Sep 13 00:25:57.058396 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Sep 13 00:25:57.058515 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 13 00:25:57.058605 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Sep 13 00:25:57.058677 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:25:57.058761 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Sep 13 00:25:57.058836 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Sep 13 00:25:57.058909 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 13 00:25:57.058980 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Sep 13 00:25:57.059048 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:25:57.059153 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Sep 13 00:25:57.059241 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Sep 13 00:25:57.059311 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 13 00:25:57.059381 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Sep 13 00:25:57.059451 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:25:57.059647 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Sep 13 00:25:57.059732 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Sep 13 00:25:57.059813 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Sep 13 00:25:57.059888 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 13 00:25:57.059977 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Sep 13 00:25:57.060052 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:25:57.060205 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Sep 13 00:25:57.060300 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Sep 13 00:25:57.060382 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Sep 13 00:25:57.060562 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Sep 13 00:25:57.060653 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 13 00:25:57.060723 kernel: pci 0000:00:02.6: 
bridge window [mem 0x10c00000-0x10dfffff] Sep 13 00:25:57.060807 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:25:57.060885 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Sep 13 00:25:57.060957 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 13 00:25:57.061033 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Sep 13 00:25:57.061137 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:25:57.061228 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Sep 13 00:25:57.061296 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Sep 13 00:25:57.061363 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Sep 13 00:25:57.061441 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:25:57.061568 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 13 00:25:57.061646 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 13 00:25:57.061718 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 13 00:25:57.061809 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 13 00:25:57.061886 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Sep 13 00:25:57.061964 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Sep 13 00:25:57.062053 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Sep 13 00:25:57.062142 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Sep 13 00:25:57.062213 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Sep 13 00:25:57.062295 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Sep 13 00:25:57.062368 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Sep 13 00:25:57.062445 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Sep 13 00:25:57.062622 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 13 00:25:57.062694 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Sep 13 00:25:57.062776 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Sep 13 00:25:57.062866 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Sep 13 00:25:57.062946 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Sep 13 00:25:57.063021 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Sep 13 00:25:57.063156 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Sep 13 00:25:57.063253 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Sep 13 00:25:57.063318 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Sep 13 00:25:57.063405 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Sep 13 00:25:57.063526 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Sep 13 00:25:57.063608 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Sep 13 00:25:57.063688 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Sep 13 00:25:57.063759 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Sep 13 00:25:57.063834 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Sep 13 00:25:57.063924 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Sep 13 00:25:57.064003 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Sep 13 00:25:57.064069 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Sep 13 00:25:57.064082 
kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 13 00:25:57.064090 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 13 00:25:57.064109 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 13 00:25:57.064119 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 13 00:25:57.064127 kernel: iommu: Default domain type: Translated Sep 13 00:25:57.064135 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:25:57.064143 kernel: efivars: Registered efivars operations Sep 13 00:25:57.064151 kernel: vgaarb: loaded Sep 13 00:25:57.064161 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:25:57.064172 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:25:57.064181 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:25:57.064189 kernel: pnp: PnP ACPI init Sep 13 00:25:57.064284 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 13 00:25:57.064296 kernel: pnp: PnP ACPI: found 1 devices Sep 13 00:25:57.064305 kernel: NET: Registered PF_INET protocol family Sep 13 00:25:57.064314 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:25:57.064324 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:25:57.064335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:25:57.064343 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:25:57.064351 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:25:57.064360 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:25:57.064368 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:25:57.064377 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:25:57.064386 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:25:57.064489 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Sep 13 00:25:57.064502 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:25:57.064514 kernel: kvm [1]: HYP mode not available Sep 13 00:25:57.064523 kernel: Initialise system trusted keyrings Sep 13 00:25:57.064532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:25:57.064541 kernel: Key type asymmetric registered Sep 13 00:25:57.064549 kernel: Asymmetric key parser 'x509' registered Sep 13 00:25:57.064558 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:25:57.064566 kernel: io scheduler mq-deadline registered Sep 13 00:25:57.064575 kernel: io scheduler kyber registered Sep 13 00:25:57.064584 kernel: io scheduler bfq registered Sep 13 00:25:57.064595 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 13 00:25:57.064681 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Sep 13 00:25:57.064758 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Sep 13 00:25:57.064827 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.064899 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Sep 13 00:25:57.064972 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Sep 13 00:25:57.065045 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.065134 kernel: pcieport 
0000:00:02.2: PME: Signaling with IRQ 52 Sep 13 00:25:57.065209 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Sep 13 00:25:57.065284 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.065359 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Sep 13 00:25:57.065429 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Sep 13 00:25:57.065534 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.065609 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Sep 13 00:25:57.065680 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Sep 13 00:25:57.065749 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.065828 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Sep 13 00:25:57.065901 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Sep 13 00:25:57.065978 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.066054 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Sep 13 00:25:57.066176 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Sep 13 00:25:57.066257 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.066333 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Sep 13 00:25:57.066405 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Sep 13 00:25:57.066520 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.066535 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Sep 13 00:25:57.066608 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Sep 13 00:25:57.066678 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Sep 13 00:25:57.066752 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 00:25:57.066763 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:25:57.066771 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:25:57.066780 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:25:57.066861 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Sep 13 00:25:57.066942 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Sep 13 00:25:57.066956 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:25:57.066966 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 13 00:25:57.067048 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Sep 13 00:25:57.067060 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Sep 13 00:25:57.067068 kernel: thunder_xcv, ver 1.0 Sep 13 00:25:57.067076 kernel: thunder_bgx, ver 1.0 Sep 13 00:25:57.067086 kernel: nicpf, ver 1.0 Sep 13 00:25:57.067094 kernel: nicvf, ver 1.0 Sep 13 00:25:57.067203 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:25:57.067271 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:25:56 UTC (1757723156) Sep 13 00:25:57.067281 kernel: hid: raw HID events driver (C) 
Jiri Kosina Sep 13 00:25:57.067290 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:25:57.067298 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:25:57.067306 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:25:57.067317 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:25:57.067325 kernel: Segment Routing with IPv6 Sep 13 00:25:57.067334 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:25:57.067342 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:25:57.067350 kernel: Key type dns_resolver registered Sep 13 00:25:57.067357 kernel: registered taskstats version 1 Sep 13 00:25:57.067366 kernel: Loading compiled-in X.509 certificates Sep 13 00:25:57.067374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:25:57.067382 kernel: Key type .fscrypt registered Sep 13 00:25:57.067392 kernel: Key type fscrypt-provisioning registered Sep 13 00:25:57.067400 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 00:25:57.067408 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:25:57.067417 kernel: ima: No architecture policies found Sep 13 00:25:57.067425 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:25:57.067433 kernel: clk: Disabling unused clocks Sep 13 00:25:57.067441 kernel: Freeing unused kernel memory: 39488K Sep 13 00:25:57.067449 kernel: Run /init as init process Sep 13 00:25:57.067470 kernel: with arguments: Sep 13 00:25:57.067482 kernel: /init Sep 13 00:25:57.067490 kernel: with environment: Sep 13 00:25:57.067498 kernel: HOME=/ Sep 13 00:25:57.067506 kernel: TERM=linux Sep 13 00:25:57.067513 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:25:57.067525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:25:57.067536 systemd[1]: Detected virtualization kvm. Sep 13 00:25:57.067545 systemd[1]: Detected architecture arm64. Sep 13 00:25:57.067555 systemd[1]: Running in initrd. Sep 13 00:25:57.067563 systemd[1]: No hostname configured, using default hostname. Sep 13 00:25:57.067572 systemd[1]: Hostname set to . Sep 13 00:25:57.067581 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:25:57.067589 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:25:57.067599 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:25:57.067607 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:25:57.067616 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:25:57.067627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:25:57.067635 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:25:57.067644 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:25:57.067655 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... 
Sep 13 00:25:57.067663 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:25:57.067675 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:25:57.067684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:25:57.067696 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:25:57.067704 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:25:57.067729 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:25:57.067740 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:25:57.067749 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:25:57.067757 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:25:57.067766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:25:57.067775 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:25:57.067786 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:25:57.067795 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:25:57.067803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:25:57.067812 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:25:57.067821 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:25:57.067829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:25:57.067839 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:25:57.067847 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:25:57.067856 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:25:57.067866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:25:57.067875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:25:57.067884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:25:57.067892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:25:57.067901 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:25:57.067910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:25:57.067951 systemd-journald[235]: Collecting audit messages is disabled. Sep 13 00:25:57.067974 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:25:57.067985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:25:57.067993 kernel: Bridge firewalling registered Sep 13 00:25:57.068002 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:25:57.068010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:25:57.068020 systemd-journald[235]: Journal started Sep 13 00:25:57.068041 systemd-journald[235]: Runtime Journal (/run/log/journal/e1303ce4f15440dfb9e0dfe93224b403) is 8.0M, max 76.6M, 68.6M free. Sep 13 00:25:57.027610 systemd-modules-load[237]: Inserted module 'overlay' Sep 13 00:25:57.070375 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 13 00:25:57.055529 systemd-modules-load[237]: Inserted module 'br_netfilter' Sep 13 00:25:57.072082 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:25:57.085845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:25:57.098859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:25:57.103763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:25:57.119253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:25:57.121000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:25:57.123356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:25:57.128873 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:25:57.134553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:25:57.142837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:25:57.165495 dracut-cmdline[271]: dracut-dracut-053 Sep 13 00:25:57.169486 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:25:57.194642 systemd-resolved[274]: Positive Trust Anchors: Sep 13 00:25:57.195664 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:25:57.195707 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:25:57.218971 systemd-resolved[274]: Defaulting to hostname 'linux'. Sep 13 00:25:57.221640 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:25:57.223343 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:25:57.309567 kernel: SCSI subsystem initialized Sep 13 00:25:57.315532 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:25:57.326525 kernel: iscsi: registered transport (tcp) Sep 13 00:25:57.345555 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:25:57.345649 kernel: QLogic iSCSI HBA Driver Sep 13 00:25:57.432796 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:25:57.446923 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:25:57.474781 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:25:57.474900 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:25:57.474942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:25:57.538537 kernel: raid6: neonx8 gen() 15648 MB/s Sep 13 00:25:57.555533 kernel: raid6: neonx4 gen() 15534 MB/s Sep 13 00:25:57.572655 kernel: raid6: neonx2 gen() 13123 MB/s Sep 13 00:25:57.591499 kernel: raid6: neonx1 gen() 10429 MB/s Sep 13 00:25:57.606569 kernel: raid6: int64x8 gen() 6714 MB/s Sep 13 00:25:57.623528 kernel: raid6: int64x4 gen() 7286 MB/s Sep 13 00:25:57.640557 kernel: raid6: int64x2 gen() 6054 MB/s Sep 13 00:25:57.657574 kernel: raid6: int64x1 gen() 5000 MB/s Sep 13 00:25:57.657711 kernel: raid6: using algorithm neonx8 gen() 15648 MB/s Sep 13 00:25:57.674526 kernel: raid6: .... xor() 11944 MB/s, rmw enabled Sep 13 00:25:57.674602 kernel: raid6: using neon recovery algorithm Sep 13 00:25:57.683977 kernel: xor: measuring software checksum speed Sep 13 00:25:57.684059 kernel: 8regs : 17784 MB/sec Sep 13 00:25:57.684074 kernel: 32regs : 19646 MB/sec Sep 13 00:25:57.685497 kernel: arm64_neon : 23549 MB/sec Sep 13 00:25:57.685565 kernel: xor: using function: arm64_neon (23549 MB/sec) Sep 13 00:25:57.742623 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:25:57.764681 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:25:57.771809 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:25:57.806730 systemd-udevd[456]: Using default interface naming scheme 'v255'. Sep 13 00:25:57.810765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:25:57.822928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:25:57.864409 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Sep 13 00:25:57.937011 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:25:57.945793 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:25:58.008532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:25:58.019028 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:25:58.054752 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:25:58.059240 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:25:58.062216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:25:58.064228 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:25:58.072790 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:25:58.112768 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:25:58.179519 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:25:58.183714 kernel: ACPI: bus type USB registered Sep 13 00:25:58.183796 kernel: usbcore: registered new interface driver usbfs Sep 13 00:25:58.189364 kernel: usbcore: registered new interface driver hub Sep 13 00:25:58.192250 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:25:58.192345 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Sep 13 00:25:58.199543 kernel: usbcore: registered new device driver usb Sep 13 00:25:58.211882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:25:58.212575 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:25:58.217295 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:25:58.220317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:25:58.220554 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:25:58.222670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:25:58.234981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:25:58.246745 kernel: sr 0:0:0:0: Power-on or device reset occurred Sep 13 00:25:58.247035 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Sep 13 00:25:58.248960 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:25:58.249017 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:25:58.255538 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:25:58.255797 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Sep 13 00:25:58.255940 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 00:25:58.257671 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Sep 13 00:25:58.257938 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Sep 13 00:25:58.258212 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Sep 13 00:25:58.259559 kernel: hub 1-0:1.0: USB hub found Sep 13 00:25:58.261502 kernel: hub 1-0:1.0: 4 ports detected Sep 13 00:25:58.261762 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 00:25:58.261938 kernel: hub 2-0:1.0: USB hub found Sep 13 00:25:58.262498 kernel: hub 2-0:1.0: 4 ports detected Sep 13 00:25:58.274888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:25:58.280205 kernel: sd 0:0:0:1: Power-on or device reset occurred Sep 13 00:25:58.280568 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Sep 13 00:25:58.282975 kernel: sd 0:0:0:1: [sda] Write Protect is off Sep 13 00:25:58.283342 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Sep 13 00:25:58.283534 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:25:58.290807 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:25:58.290877 kernel: GPT:17805311 != 80003071 Sep 13 00:25:58.290892 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:25:58.290904 kernel: GPT:17805311 != 80003071 Sep 13 00:25:58.290914 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:25:58.290926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 13 00:25:58.288796 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:25:58.294551 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Sep 13 00:25:58.325951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:25:58.361257 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (530) Sep 13 00:25:58.366505 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (534) Sep 13 00:25:58.372339 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Sep 13 00:25:58.378754 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Sep 13 00:25:58.386750 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Sep 13 00:25:58.390038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Sep 13 00:25:58.400280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:25:58.410744 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:25:58.440081 disk-uuid[576]: Primary Header is updated.
Sep 13 00:25:58.440081 disk-uuid[576]: Secondary Entries is updated.
Sep 13 00:25:58.440081 disk-uuid[576]: Secondary Header is updated.
Sep 13 00:25:58.446523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:25:58.454508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:25:58.459517 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:25:58.506297 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 13 00:25:58.649434 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Sep 13 00:25:58.649535 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Sep 13 00:25:58.652487 kernel: usbcore: registered new interface driver usbhid
Sep 13 00:25:58.652570 kernel: usbhid: USB HID core driver
Sep 13 00:25:58.749531 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Sep 13 00:25:58.883499 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Sep 13 00:25:58.937577 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Sep 13 00:25:59.466580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 13 00:25:59.467805 disk-uuid[577]: The operation has completed successfully.
Sep 13 00:25:59.568853 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:25:59.569014 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:25:59.580848 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:25:59.595064 sh[594]: Success
Sep 13 00:25:59.614655 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:25:59.696938 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:25:59.712286 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:25:59.719736 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:25:59.749902 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:25:59.749987 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:25:59.750004 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:25:59.750020 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:25:59.750050 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:25:59.757520 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:25:59.761179 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:25:59.763323 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:25:59.784905 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:25:59.790823 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:25:59.810505 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:25:59.810596 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:25:59.810614 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:25:59.819005 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:25:59.819129 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:25:59.840240 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:25:59.839842 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:25:59.853982 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:25:59.863821 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:26:00.001181 ignition[675]: Ignition 2.19.0
Sep 13 00:26:00.001192 ignition[675]: Stage: fetch-offline
Sep 13 00:26:00.001243 ignition[675]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:00.003273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:26:00.001251 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:00.001437 ignition[675]: parsed url from cmdline: ""
Sep 13 00:26:00.001442 ignition[675]: no config URL provided
Sep 13 00:26:00.001446 ignition[675]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:26:00.001467 ignition[675]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:26:00.001474 ignition[675]: failed to fetch config: resource requires networking
Sep 13 00:26:00.001732 ignition[675]: Ignition finished successfully
Sep 13 00:26:00.050091 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:26:00.060027 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:26:00.088659 systemd-networkd[782]: lo: Link UP
Sep 13 00:26:00.088674 systemd-networkd[782]: lo: Gained carrier
Sep 13 00:26:00.090753 systemd-networkd[782]: Enumeration completed
Sep 13 00:26:00.090943 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:26:00.092145 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:00.092150 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:00.092878 systemd[1]: Reached target network.target - Network.
Sep 13 00:26:00.094235 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:00.094239 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:00.095539 systemd-networkd[782]: eth0: Link UP
Sep 13 00:26:00.095545 systemd-networkd[782]: eth0: Gained carrier
Sep 13 00:26:00.095557 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:00.103159 systemd-networkd[782]: eth1: Link UP
Sep 13 00:26:00.103166 systemd-networkd[782]: eth1: Gained carrier
Sep 13 00:26:00.103182 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:00.105036 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:26:00.131410 ignition[784]: Ignition 2.19.0
Sep 13 00:26:00.131422 ignition[784]: Stage: fetch
Sep 13 00:26:00.131714 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:00.131726 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:00.131849 ignition[784]: parsed url from cmdline: ""
Sep 13 00:26:00.131853 ignition[784]: no config URL provided
Sep 13 00:26:00.131859 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:26:00.131867 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:26:00.131891 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Sep 13 00:26:00.134813 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Sep 13 00:26:00.150693 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:26:00.159658 systemd-networkd[782]: eth0: DHCPv4 address 78.46.184.112/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:26:00.334933 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Sep 13 00:26:00.341946 ignition[784]: GET result: OK
Sep 13 00:26:00.342145 ignition[784]: parsing config with SHA512: 45a73611cfdc0ba3c507d1f4659d9222a7db0fadbe86edaf073e7f7672b6db90e20564b624255601fa0083b501a9daafbea5dc35478d06ae0afc6f771bab3dc3
Sep 13 00:26:00.351598 unknown[784]: fetched base config from "system"
Sep 13 00:26:00.352308 ignition[784]: fetch: fetch complete
Sep 13 00:26:00.351626 unknown[784]: fetched base config from "system"
Sep 13 00:26:00.352320 ignition[784]: fetch: fetch passed
Sep 13 00:26:00.351634 unknown[784]: fetched user config from "hetzner"
Sep 13 00:26:00.352397 ignition[784]: Ignition finished successfully
Sep 13 00:26:00.356352 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:26:00.363768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:26:00.407999 ignition[791]: Ignition 2.19.0
Sep 13 00:26:00.408010 ignition[791]: Stage: kargs
Sep 13 00:26:00.411629 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:00.411654 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:00.415893 ignition[791]: kargs: kargs passed
Sep 13 00:26:00.416662 ignition[791]: Ignition finished successfully
Sep 13 00:26:00.418650 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:26:00.425587 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:26:00.444976 ignition[798]: Ignition 2.19.0
Sep 13 00:26:00.444994 ignition[798]: Stage: disks
Sep 13 00:26:00.445337 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:00.445349 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:00.447511 ignition[798]: disks: disks passed
Sep 13 00:26:00.447596 ignition[798]: Ignition finished successfully
Sep 13 00:26:00.449712 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:26:00.453377 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:26:00.455148 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:26:00.457859 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:26:00.459583 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:26:00.460995 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:26:00.474426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:26:00.505365 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 00:26:00.514337 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:26:00.539327 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:26:00.624672 kernel: EXT4-fs (sda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:26:00.625711 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:26:00.628775 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:26:00.641845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:26:00.657688 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:26:00.663875 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:26:00.666349 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:26:00.666412 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:26:00.671384 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:26:00.682247 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:26:00.684930 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Sep 13 00:26:00.688515 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:26:00.688601 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:26:00.688616 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:26:00.696550 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:26:00.696638 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:26:00.713500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:26:00.770253 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:26:00.778298 coreos-metadata[816]: Sep 13 00:26:00.778 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Sep 13 00:26:00.781928 coreos-metadata[816]: Sep 13 00:26:00.781 INFO Fetch successful
Sep 13 00:26:00.781928 coreos-metadata[816]: Sep 13 00:26:00.781 INFO wrote hostname ci-4081-3-5-n-c2bbffc425 to /sysroot/etc/hostname
Sep 13 00:26:00.785582 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:26:00.788764 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:26:00.802280 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:26:00.809487 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:26:00.973763 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:26:00.979662 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:26:00.983752 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:26:00.999530 kernel: BTRFS info (device sda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:26:01.001049 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:26:01.040526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:26:01.047365 ignition[931]: INFO : Ignition 2.19.0
Sep 13 00:26:01.047365 ignition[931]: INFO : Stage: mount
Sep 13 00:26:01.048783 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:01.048783 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:01.051307 ignition[931]: INFO : mount: mount passed
Sep 13 00:26:01.051307 ignition[931]: INFO : Ignition finished successfully
Sep 13 00:26:01.051733 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:26:01.064190 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:26:01.084044 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:26:01.114594 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Sep 13 00:26:01.116809 kernel: BTRFS info (device sda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:26:01.116902 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:26:01.116940 kernel: BTRFS info (device sda6): using free space tree
Sep 13 00:26:01.121784 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 13 00:26:01.121838 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 13 00:26:01.125403 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:26:01.169506 ignition[960]: INFO : Ignition 2.19.0
Sep 13 00:26:01.169506 ignition[960]: INFO : Stage: files
Sep 13 00:26:01.169506 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:01.169506 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:01.173210 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:26:01.175968 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:26:01.177205 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:26:01.185598 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:26:01.187870 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:26:01.190162 unknown[960]: wrote ssh authorized keys file for user: core
Sep 13 00:26:01.192002 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:26:01.197285 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:26:01.197285 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:26:01.197285 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:26:01.197285 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:26:01.399664 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:26:01.667667 systemd-networkd[782]: eth1: Gained IPv6LL
Sep 13 00:26:01.796509 systemd-networkd[782]: eth0: Gained IPv6LL
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:26:02.166532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:26:02.188861 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:26:02.188861 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:26:02.188861 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:26:02.445852 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:26:02.707103 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:26:02.707103 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:26:02.710762 ignition[960]: INFO : files: files passed
Sep 13 00:26:02.710762 ignition[960]: INFO : Ignition finished successfully
Sep 13 00:26:02.715634 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:26:02.724767 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:26:02.735590 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:26:02.752147 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:26:02.754668 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:26:02.767814 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:26:02.767814 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:26:02.773332 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:26:02.777229 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:26:02.778669 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:26:02.785770 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:26:02.855560 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:26:02.855733 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:26:02.858198 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:26:02.859828 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:26:02.861583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:26:02.866917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:26:02.905716 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:26:02.918941 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:26:02.944556 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:26:02.946196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:26:02.948891 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:26:02.950645 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:26:02.950863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:26:02.955414 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:26:02.956956 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:26:02.961158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:26:02.963402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:26:02.965823 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:26:02.969595 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:26:02.971946 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:26:02.973633 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:26:02.976285 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:26:02.978304 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:26:02.980583 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:26:02.980828 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:26:02.983738 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:26:02.986506 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:26:02.990200 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:26:02.993623 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:26:02.997422 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:26:02.997643 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:26:03.002177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:26:03.002349 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:26:03.004523 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:26:03.004666 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:26:03.006186 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:26:03.006514 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:26:03.024316 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:26:03.027195 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:26:03.027417 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:26:03.031865 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:26:03.032585 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:26:03.032775 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:26:03.035328 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:26:03.035875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:26:03.045291 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:26:03.045636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:26:03.059572 ignition[1012]: INFO : Ignition 2.19.0
Sep 13 00:26:03.059572 ignition[1012]: INFO : Stage: umount
Sep 13 00:26:03.063486 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:26:03.063486 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Sep 13 00:26:03.063486 ignition[1012]: INFO : umount: umount passed
Sep 13 00:26:03.063486 ignition[1012]: INFO : Ignition finished successfully
Sep 13 00:26:03.066402 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:26:03.072982 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:26:03.073187 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:26:03.079727 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:26:03.079800 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:26:03.081216 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:26:03.081287 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:26:03.084351 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:26:03.084442 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:26:03.085569 systemd[1]: Stopped target network.target - Network.
Sep 13 00:26:03.092726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:26:03.092846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:26:03.094140 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:26:03.097904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:26:03.101572 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:26:03.105278 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:26:03.107380 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:26:03.109411 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:26:03.110356 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:26:03.111761 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:26:03.111873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:26:03.112938 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:26:03.113083 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:26:03.114290 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:26:03.114368 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:26:03.115797 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:26:03.116913 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:26:03.118450 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:26:03.118607 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:26:03.120104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:26:03.120243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:26:03.130240 systemd-networkd[782]: eth0: DHCPv6 lease lost
Sep 13 00:26:03.132380 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:26:03.132567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:26:03.135620 systemd-networkd[782]: eth1: DHCPv6 lease lost
Sep 13 00:26:03.136985 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:26:03.137152 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:26:03.140525 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:26:03.141023 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:26:03.143831 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:26:03.143875 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:26:03.151891 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:26:03.152700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:26:03.152898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:26:03.155238 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:26:03.155320 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:26:03.157475 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:26:03.157561 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:26:03.158613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:26:03.180570 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:26:03.180786 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:26:03.183983 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:26:03.184344 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:26:03.186367 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:26:03.186430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:26:03.187389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:26:03.187507 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:26:03.188227 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:26:03.188297 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:26:03.189947 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:26:03.190010 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:26:03.192140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:26:03.192213 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:26:03.199983 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:26:03.200903 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:26:03.200997 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:26:03.204851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:26:03.204938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:26:03.232084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:26:03.232471 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:26:03.235759 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:26:03.252935 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:26:03.267729 systemd[1]: Switching root.
Sep 13 00:26:03.307098 systemd-journald[235]: Journal stopped
Sep 13 00:26:04.619000 systemd-journald[235]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:26:04.619146 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:26:04.619166 kernel: SELinux: policy capability open_perms=1
Sep 13 00:26:04.619178 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:26:04.619195 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:26:04.619207 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:26:04.619221 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:26:04.619234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:26:04.619245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:26:04.619258 kernel: audit: type=1403 audit(1757723163.562:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:26:04.619270 systemd[1]: Successfully loaded SELinux policy in 42.015ms.
Sep 13 00:26:04.619294 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.248ms.
Sep 13 00:26:04.619308 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:26:04.619321 systemd[1]: Detected virtualization kvm.
Sep 13 00:26:04.619336 systemd[1]: Detected architecture arm64.
Sep 13 00:26:04.619350 systemd[1]: Detected first boot.
Sep 13 00:26:04.619364 systemd[1]: Hostname set to <ci-4081-3-5-n-c2bbffc425>.
Sep 13 00:26:04.619378 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:26:04.619391 zram_generator::config[1074]: No configuration found.
Sep 13 00:26:04.619403 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:26:04.619415 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:26:04.619426 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Sep 13 00:26:04.619440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:26:04.619452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:26:04.620754 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:26:04.620770 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:26:04.620785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:26:04.620798 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:26:04.620812 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:26:04.620825 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:26:04.620839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:26:04.620862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:26:04.620875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:26:04.620888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:26:04.620901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:26:04.620914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:26:04.620927 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 13 00:26:04.620937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:26:04.620949 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:26:04.620959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:26:04.620977 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:26:04.620994 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:26:04.621010 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:26:04.621027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:26:04.621106 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:26:04.621120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:26:04.621133 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:26:04.621149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:26:04.621163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:26:04.621174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:26:04.621186 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:26:04.621197 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:26:04.621208 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:26:04.621218 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:26:04.621229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:26:04.621243 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:26:04.621258 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:26:04.621272 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:26:04.621285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:26:04.621307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:26:04.621323 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:26:04.621342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:26:04.621355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:26:04.621369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:26:04.621381 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:26:04.621392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:26:04.621403 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:26:04.621415 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:26:04.621429 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:26:04.621444 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:26:04.621492 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:26:04.621510 kernel: ACPI: bus type drm_connector registered
Sep 13 00:26:04.621522 kernel: fuse: init (API version 7.39)
Sep 13 00:26:04.621532 kernel: loop: module loaded
Sep 13 00:26:04.621543 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:26:04.621555 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:26:04.621565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:26:04.621576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:26:04.621590 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:26:04.621601 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:26:04.621612 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:26:04.621623 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:26:04.621634 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:26:04.621644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:26:04.621660 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:26:04.621672 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:26:04.621731 systemd-journald[1154]: Collecting audit messages is disabled.
Sep 13 00:26:04.621769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:26:04.621782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:26:04.621799 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:26:04.621815 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:26:04.621829 systemd-journald[1154]: Journal started
Sep 13 00:26:04.621854 systemd-journald[1154]: Runtime Journal (/run/log/journal/e1303ce4f15440dfb9e0dfe93224b403) is 8.0M, max 76.6M, 68.6M free.
Sep 13 00:26:04.623425 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:26:04.627233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:26:04.627547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:26:04.629073 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:26:04.629297 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:26:04.630697 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:26:04.630923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:26:04.634506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:26:04.635993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:26:04.637410 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:26:04.639276 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:26:04.664959 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:26:04.674741 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:26:04.678582 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:26:04.682631 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:26:04.687080 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:26:04.691719 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:26:04.694617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:26:04.708723 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:26:04.709686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:26:04.716770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:26:04.732666 systemd-journald[1154]: Time spent on flushing to /var/log/journal/e1303ce4f15440dfb9e0dfe93224b403 is 82.902ms for 1109 entries.
Sep 13 00:26:04.732666 systemd-journald[1154]: System Journal (/var/log/journal/e1303ce4f15440dfb9e0dfe93224b403) is 8.0M, max 584.8M, 576.8M free.
Sep 13 00:26:04.831781 systemd-journald[1154]: Received client request to flush runtime journal.
Sep 13 00:26:04.738564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:26:04.762953 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:26:04.768023 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:26:04.808228 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:26:04.811989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:26:04.841245 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:26:04.852414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:26:04.863973 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:26:04.867735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:26:04.888314 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:26:04.894689 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Sep 13 00:26:04.894706 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Sep 13 00:26:04.908336 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:26:04.919956 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:26:04.965683 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:26:04.976765 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:26:05.000697 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Sep 13 00:26:05.000719 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Sep 13 00:26:05.010851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:26:05.473349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:26:05.482106 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:26:05.531731 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Sep 13 00:26:05.565222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:26:05.579838 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:26:05.614574 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:26:05.684133 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Sep 13 00:26:05.760646 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:26:05.867599 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1245)
Sep 13 00:26:05.884702 systemd-networkd[1244]: lo: Link UP
Sep 13 00:26:05.885224 systemd-networkd[1244]: lo: Gained carrier
Sep 13 00:26:05.888269 systemd-networkd[1244]: Enumeration completed
Sep 13 00:26:05.890083 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:05.890214 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:05.890512 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:26:05.891889 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:05.891965 systemd-networkd[1244]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:26:05.892845 systemd-networkd[1244]: eth0: Link UP
Sep 13 00:26:05.892978 systemd-networkd[1244]: eth0: Gained carrier
Sep 13 00:26:05.893077 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:05.900507 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:26:05.911952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:26:05.912161 systemd-networkd[1244]: eth1: Link UP
Sep 13 00:26:05.912167 systemd-networkd[1244]: eth1: Gained carrier
Sep 13 00:26:05.912195 systemd-networkd[1244]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:05.955751 systemd-networkd[1244]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Sep 13 00:26:05.960577 systemd-networkd[1244]: eth0: DHCPv4 address 78.46.184.112/32, gateway 172.31.1.1 acquired from 172.31.1.1
Sep 13 00:26:05.965177 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:26:05.984072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:26:05.991783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:26:05.998181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:26:06.008818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:26:06.012437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:26:06.014670 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:26:06.015332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:26:06.015605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:26:06.046512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Sep 13 00:26:06.057342 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:26:06.058136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:26:06.063361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:26:06.070662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:26:06.071003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:26:06.073358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:26:06.100997 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Sep 13 00:26:06.101141 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:26:06.101158 kernel: [drm] features: -context_init
Sep 13 00:26:06.102522 kernel: [drm] number of scanouts: 1
Sep 13 00:26:06.102616 kernel: [drm] number of cap sets: 0
Sep 13 00:26:06.104244 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Sep 13 00:26:06.116547 kernel: Console: switching to colour frame buffer device 160x50
Sep 13 00:26:06.137530 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:26:06.146218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:26:06.236584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:26:06.268605 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:26:06.277715 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:26:06.302567 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:26:06.332185 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:26:06.335252 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:26:06.343759 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:26:06.351653 lvm[1309]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:26:06.383412 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:26:06.385200 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:26:06.386843 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:26:06.387088 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:26:06.388154 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:26:06.391161 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:26:06.399782 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:26:06.404739 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:26:06.407763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:26:06.415784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:26:06.419795 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:26:06.433296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:26:06.437635 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:26:06.451403 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:26:06.465536 kernel: loop0: detected capacity change from 0 to 203944 Sep 13 00:26:06.484545 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:26:06.491615 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:26:06.502218 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:26:06.535706 kernel: loop1: detected capacity change from 0 to 114328 Sep 13 00:26:06.572249 kernel: loop2: detected capacity change from 0 to 8 Sep 13 00:26:06.598646 kernel: loop3: detected capacity change from 0 to 114432 Sep 13 00:26:06.671538 kernel: loop4: detected capacity change from 0 to 203944 Sep 13 00:26:06.701190 kernel: loop5: detected capacity change from 0 to 114328 Sep 13 00:26:06.712629 kernel: loop6: detected capacity change from 0 to 8 Sep 13 00:26:06.719982 kernel: loop7: detected capacity change from 0 to 114432 Sep 13 00:26:06.728995 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Sep 13 00:26:06.730180 (sd-merge)[1330]: Merged extensions into '/usr'. Sep 13 00:26:06.748991 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:26:06.749056 systemd[1]: Reloading... Sep 13 00:26:06.883637 zram_generator::config[1361]: No configuration found. Sep 13 00:26:07.003180 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:26:07.020791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:26:07.087106 systemd[1]: Reloading finished in 337 ms. Sep 13 00:26:07.109447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:26:07.114342 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:26:07.129932 systemd[1]: Starting ensure-sysext.service... Sep 13 00:26:07.135985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:26:07.145762 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:26:07.146001 systemd[1]: Reloading... Sep 13 00:26:07.189151 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:26:07.189503 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:26:07.190354 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:26:07.190639 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 13 00:26:07.190694 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 13 00:26:07.194789 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. 
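The sd-merge lines show systemd-sysext overlaying four extension images onto /usr; the paired loop0-3/loop4-7 capacity changes above are consistent with those same four images being attached twice during the scan. sysext discovers extensions in a fixed search path; a rough sketch of that discovery step (the search directories are the documented locations, and this simplification ignores the version and architecture matching sysext performs on each extension-release file):

```python
from pathlib import Path

# Documented systemd-sysext search path, highest priority first.
SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH:
    root = Path(d)
    if not root.is_dir():
        continue
    for entry in sorted(root.iterdir()):
        # Raw disk images end in .raw; bare directories are merged as-is.
        kind = "raw image" if entry.suffix == ".raw" else "directory tree"
        print(f"{entry.name}: {kind} (from {d})")
```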
Sep 13 00:26:07.194806 systemd-tmpfiles[1403]: Skipping /boot Sep 13 00:26:07.206850 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:26:07.206873 systemd-tmpfiles[1403]: Skipping /boot Sep 13 00:26:07.276566 zram_generator::config[1441]: No configuration found. Sep 13 00:26:07.363670 systemd-networkd[1244]: eth0: Gained IPv6LL Sep 13 00:26:07.411054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:26:07.478961 systemd[1]: Reloading finished in 332 ms. Sep 13 00:26:07.500161 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:26:07.501934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:26:07.523908 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:26:07.534829 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:26:07.540088 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:26:07.551640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:26:07.566152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:26:07.580282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:07.588920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:26:07.602917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:26:07.610262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:26:07.617309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:07.629901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:26:07.630201 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:26:07.641718 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:26:07.647826 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:26:07.656368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:26:07.659612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:26:07.665492 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:26:07.667776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:26:07.686666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:07.694884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:26:07.699845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:07.700114 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:26:07.710933 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
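The "Duplicate line for path" warnings above mean two tmpfiles.d fragments declare a rule for the same path; systemd-tmpfiles keeps the first rule it reads and ignores the rest. A rough way to locate such collisions yourself (simplified: it keys on the path column only and ignores the rule type in column one):

```python
import glob, shlex
from collections import defaultdict

owners = defaultdict(list)
for frag in sorted(glob.glob("/usr/lib/tmpfiles.d/*.conf")):
    with open(frag) as fh:
        for lineno, line in enumerate(fh, 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = shlex.split(line)   # handles quoted paths
            if len(fields) >= 2:
                owners[fields[1]].append(f"{frag}:{lineno}")

for path, locations in owners.items():
    if len(locations) > 1:
        print(path, "->", ", ".join(locations))
```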
Sep 13 00:26:07.725385 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:26:07.736771 augenrules[1517]: No rules Sep 13 00:26:07.739778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:26:07.740114 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:26:07.748843 systemd-networkd[1244]: eth1: Gained IPv6LL Sep 13 00:26:07.754923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:26:07.756233 systemd-resolved[1482]: Positive Trust Anchors: Sep 13 00:26:07.756254 systemd-resolved[1482]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:26:07.756290 systemd-resolved[1482]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:26:07.764188 systemd[1]: Finished ensure-sysext.service. Sep 13 00:26:07.769377 systemd-resolved[1482]: Using system hostname 'ci-4081-3-5-n-c2bbffc425'. Sep 13 00:26:07.770286 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:26:07.776211 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:26:07.778526 systemd[1]: Reached target network.target - Network. Sep 13 00:26:07.779255 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:26:07.780070 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:26:07.782374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:26:07.788744 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:26:07.792905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:26:07.806883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:26:07.807820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:26:07.813837 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:26:07.815730 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:26:07.819691 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:26:07.819945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:26:07.821601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:26:07.821829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:26:07.823292 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:26:07.827169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
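The positive trust anchor resolved loads above is the DNSSEC root-zone trust anchor. Reading the DS record left to right: key tag 20326 (the root KSK introduced in 2017), algorithm 8 (RSA/SHA-256, RFC 5702), digest type 2 (SHA-256, RFC 4509), then the digest itself:

```python
# Root-zone DS record, copied from the systemd-resolved line above.
record = ("20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
key_tag, algorithm, digest_type, digest = record.split()

ALGORITHMS = {"8": "RSA/SHA-256 (RFC 5702)"}
DIGEST_TYPES = {"2": "SHA-256 (RFC 4509)"}

print("key tag    :", key_tag)                  # 20326 = root KSK-2017
print("algorithm  :", ALGORITHMS[algorithm])
print("digest type:", DIGEST_TYPES[digest_type])
print("digest     :", digest)
```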
Sep 13 00:26:07.834893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:26:07.835115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:26:07.897775 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:26:07.900927 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:26:07.903781 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:26:07.905145 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:26:07.906142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:26:07.907115 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:26:07.907153 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:26:07.907846 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:26:07.908818 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:26:07.909949 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:26:07.910752 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:26:07.912770 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:26:07.915990 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:26:07.919906 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:26:07.925067 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:26:07.926336 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:26:07.927352 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:26:07.928328 systemd[1]: System is tainted: cgroupsv1 Sep 13 00:26:07.928412 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:26:07.928451 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:26:07.931796 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:26:07.935803 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 00:26:07.946786 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:26:07.950636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:26:07.967603 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:26:07.972660 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:26:07.988724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:08.004617 jq[1549]: false Sep 13 00:26:08.005940 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:26:08.012697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
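The "System is tainted: cgroupsv1" note above records that this image still boots systemd on the legacy cgroup v1 hierarchy. A quick way to tell the hierarchies apart is to look for `cgroup.controllers` at the cgroup mount root, which only the unified (v2) hierarchy exposes:

```python
import os

# cgroup.controllers exists only at the root of a v2 (unified) hierarchy;
# a legacy v1 setup mounts per-controller subdirectories instead.
if os.path.exists("/sys/fs/cgroup/cgroup.controllers"):
    print("cgroup v2 (unified hierarchy)")
else:
    print("cgroup v1 (legacy hierarchy) - matching the taint above")
```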
Sep 13 00:26:08.015582 dbus-daemon[1548]: [system] SELinux support is enabled Sep 13 00:26:08.034243 systemd-timesyncd[1536]: Contacted time server 78.47.249.55:123 (0.flatcar.pool.ntp.org). Sep 13 00:26:08.034316 systemd-timesyncd[1536]: Initial clock synchronization to Sat 2025-09-13 00:26:08.205660 UTC. Sep 13 00:26:08.040384 coreos-metadata[1546]: Sep 13 00:26:08.037 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Sep 13 00:26:08.038874 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:26:08.042375 coreos-metadata[1546]: Sep 13 00:26:08.042 INFO Fetch successful Sep 13 00:26:08.042375 coreos-metadata[1546]: Sep 13 00:26:08.042 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Sep 13 00:26:08.044261 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Sep 13 00:26:08.049177 coreos-metadata[1546]: Sep 13 00:26:08.047 INFO Fetch successful Sep 13 00:26:08.054817 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:26:08.064353 extend-filesystems[1552]: Found loop4 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found loop5 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found loop6 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found loop7 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda1 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda2 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda3 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found usr Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda4 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda6 Sep 13 00:26:08.064353 extend-filesystems[1552]: Found sda7 Sep 13 00:26:08.065470 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:26:08.114350 extend-filesystems[1552]: Found sda9 Sep 13 00:26:08.114350 extend-filesystems[1552]: Checking size of /dev/sda9 Sep 13 00:26:08.073417 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:26:08.080735 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:26:08.093295 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:26:08.105497 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:26:08.110562 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:26:08.136299 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:26:08.141651 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:26:08.161318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:26:08.162736 jq[1572]: true Sep 13 00:26:08.162320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:26:08.228842 extend-filesystems[1552]: Resized partition /dev/sda9 Sep 13 00:26:08.240485 extend-filesystems[1602]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:26:08.243768 update_engine[1570]: I20250913 00:26:08.237116 1570 main.cc:92] Flatcar Update Engine starting Sep 13 00:26:08.255825 update_engine[1570]: I20250913 00:26:08.252701 1570 update_check_scheduler.cc:74] Next update check in 10m44s Sep 13 00:26:08.256653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
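The metadata fetches above hit Hetzner's link-local metadata service; the URL below is taken verbatim from the coreos-metadata lines. A minimal reproduction of the first fetch (only meaningful from inside a Hetzner Cloud VM, since 169.254.0.0/16 is link-local and the response format is the service's own):

```python
from urllib.request import urlopen

# Endpoint exactly as logged by coreos-metadata above.
URL = "http://169.254.169.254/hetzner/v1/metadata"

with urlopen(URL, timeout=5) as resp:
    print(resp.read().decode())  # instance metadata document
```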
Sep 13 00:26:08.271101 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:26:08.273231 tar[1581]: linux-arm64/helm Sep 13 00:26:08.272935 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:26:08.272968 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:26:08.275215 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:26:08.275257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:26:08.276580 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:26:08.278177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:26:08.282732 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:26:08.296793 jq[1585]: true Sep 13 00:26:08.299492 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Sep 13 00:26:08.338015 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:26:08.338320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:26:08.410949 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:26:08.413630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:26:08.441106 systemd-logind[1566]: New seat seat0. Sep 13 00:26:08.447613 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:26:08.447652 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Sep 13 00:26:08.448768 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:26:08.535503 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Sep 13 00:26:08.562616 extend-filesystems[1602]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 13 00:26:08.562616 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 5 Sep 13 00:26:08.562616 extend-filesystems[1602]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Sep 13 00:26:08.599385 extend-filesystems[1552]: Resized filesystem in /dev/sda9 Sep 13 00:26:08.599385 extend-filesystems[1552]: Found sr0 Sep 13 00:26:08.606416 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:26:08.572354 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:26:08.572682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:26:08.583282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:26:08.649770 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1250) Sep 13 00:26:08.642786 systemd[1]: Starting sshkeys.service... Sep 13 00:26:08.689973 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:26:08.702915 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
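The resize above grows the root filesystem from 1617920 to 9393147 blocks of 4 KiB, i.e. from the shipped image size out to the full disk. In bytes, using the figures from the kernel and resize2fs lines:

```python
BLOCK = 4096                       # "(4k) blocks" per the log above
old_blocks, new_blocks = 1617920, 9393147

GiB = 1024 ** 3
print(f"before: {old_blocks * BLOCK / GiB:.2f} GiB")  # ~6.17 GiB
print(f"after : {new_blocks * BLOCK / GiB:.2f} GiB")  # ~35.83 GiB
```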
Sep 13 00:26:08.787213 coreos-metadata[1656]: Sep 13 00:26:08.786 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Sep 13 00:26:08.790351 coreos-metadata[1656]: Sep 13 00:26:08.790 INFO Fetch successful Sep 13 00:26:08.801704 containerd[1597]: time="2025-09-13T00:26:08.801415840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:26:08.801612 unknown[1656]: wrote ssh authorized keys file for user: core Sep 13 00:26:08.850538 containerd[1597]: time="2025-09-13T00:26:08.850424560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853058 containerd[1597]: time="2025-09-13T00:26:08.852966400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853058 containerd[1597]: time="2025-09-13T00:26:08.853051520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:26:08.853211 containerd[1597]: time="2025-09-13T00:26:08.853075720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:26:08.853318 containerd[1597]: time="2025-09-13T00:26:08.853289520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:26:08.853355 containerd[1597]: time="2025-09-13T00:26:08.853318120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853722 containerd[1597]: time="2025-09-13T00:26:08.853400920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853722 containerd[1597]: time="2025-09-13T00:26:08.853418800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853815 containerd[1597]: time="2025-09-13T00:26:08.853735800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853815 containerd[1597]: time="2025-09-13T00:26:08.853759320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853815 containerd[1597]: time="2025-09-13T00:26:08.853774840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853815 containerd[1597]: time="2025-09-13T00:26:08.853785600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853947 containerd[1597]: time="2025-09-13T00:26:08.853874720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:26:08.853972 update-ssh-keys[1663]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:26:08.855751 containerd[1597]: time="2025-09-13T00:26:08.854592600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:26:08.865529 containerd[1597]: time="2025-09-13T00:26:08.858241680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:26:08.865529 containerd[1597]: time="2025-09-13T00:26:08.858301400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:26:08.865529 containerd[1597]: time="2025-09-13T00:26:08.860675520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:26:08.865529 containerd[1597]: time="2025-09-13T00:26:08.860776160Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:26:08.861701 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:26:08.878319 systemd[1]: Finished sshkeys.service. Sep 13 00:26:08.880134 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.883537520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.883735200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.883778440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.883801480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.883819840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:26:08.885186 containerd[1597]: time="2025-09-13T00:26:08.884097920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.886868120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887189800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887218640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887240120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887260760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887279280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887300520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887322000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887342960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887363200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887382960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887400880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887432120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888515 containerd[1597]: time="2025-09-13T00:26:08.887472960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887491280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887510840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887528960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887566040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887586400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887608480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887627160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887648960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887667200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887684200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887710520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887735120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887767960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887786040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.888915 containerd[1597]: time="2025-09-13T00:26:08.887803040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.887944640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.887969480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.887985200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.888018280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.888035760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.888053680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.888068640Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:26:08.889340 containerd[1597]: time="2025-09-13T00:26:08.888084240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:26:08.891338 containerd[1597]: time="2025-09-13T00:26:08.891052720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:26:08.892807 containerd[1597]: time="2025-09-13T00:26:08.892673320Z" level=info msg="Connect containerd service" Sep 13 00:26:08.893786 containerd[1597]: time="2025-09-13T00:26:08.893605920Z" level=info msg="using legacy CRI server" Sep 13 00:26:08.893786 containerd[1597]: time="2025-09-13T00:26:08.893719040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:26:08.894264 containerd[1597]: time="2025-09-13T00:26:08.894236120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:26:08.896141 containerd[1597]: time="2025-09-13T00:26:08.895966080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
00:26:08.896891 containerd[1597]: time="2025-09-13T00:26:08.896429680Z" level=info msg="Start subscribing containerd event" Sep 13 00:26:08.896891 containerd[1597]: time="2025-09-13T00:26:08.896645920Z" level=info msg="Start recovering state" Sep 13 00:26:08.897241 containerd[1597]: time="2025-09-13T00:26:08.897134680Z" level=info msg="Start event monitor" Sep 13 00:26:08.897333 containerd[1597]: time="2025-09-13T00:26:08.897321280Z" level=info msg="Start snapshots syncer" Sep 13 00:26:08.897613 containerd[1597]: time="2025-09-13T00:26:08.897506440Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:26:08.897613 containerd[1597]: time="2025-09-13T00:26:08.897524080Z" level=info msg="Start streaming server" Sep 13 00:26:08.898758 containerd[1597]: time="2025-09-13T00:26:08.898625640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:26:08.898758 containerd[1597]: time="2025-09-13T00:26:08.898712080Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:26:08.899626 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:26:08.902393 containerd[1597]: time="2025-09-13T00:26:08.902179040Z" level=info msg="containerd successfully booted in 0.102679s" Sep 13 00:26:09.398867 tar[1581]: linux-arm64/LICENSE Sep 13 00:26:09.398867 tar[1581]: linux-arm64/README.md Sep 13 00:26:09.430181 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:26:09.534356 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:26:09.585514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:26:09.595083 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:26:09.632226 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:26:09.633192 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:26:09.645182 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:26:09.684360 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:26:09.700741 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:26:09.720765 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:26:09.724339 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:26:09.733759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:09.737129 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:26:09.739101 systemd[1]: Startup finished in 7.899s (kernel) + 6.219s (userspace) = 14.118s. Sep 13 00:26:09.747411 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:10.459331 kubelet[1705]: E0913 00:26:10.459211 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:10.464832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:10.465034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:20.609716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
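The failure loop that starts here repeats below: kubelet exits because /var/lib/kubelet/config.yaml is absent (that file is normally written by kubeadm during init/join, which has not run yet at this point), and systemd reschedules the unit on a fixed back-off. Pulling the "Scheduled restart job" timestamps out of this log shows the roughly ten-second cadence:

```python
from datetime import datetime

# Timestamps of "Scheduled restart job, restart counter is at N",
# copied from this log (restart counters 1 through 4).
stamps = ["00:26:20.609716", "00:26:31.109716",
          "00:26:41.609281", "00:26:51.858950"]

times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
for n, (a, b) in enumerate(zip(times, times[1:]), start=2):
    print(f"restart {n}: {(b - a).total_seconds():.1f}s after the previous")
```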
Sep 13 00:26:20.618976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:20.806782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:20.816217 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:20.885803 kubelet[1730]: E0913 00:26:20.885649 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:20.888737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:20.889075 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:31.109716 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:26:31.119901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:31.304248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:31.316247 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:31.372763 kubelet[1750]: E0913 00:26:31.372589 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:31.378862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:31.379134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:39.003130 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:26:39.014182 systemd[1]: Started sshd@0-78.46.184.112:22-147.75.109.163:58388.service - OpenSSH per-connection server daemon (147.75.109.163:58388). Sep 13 00:26:40.003921 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 58388 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:40.005995 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:40.017128 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:26:40.024095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:26:40.028561 systemd-logind[1566]: New session 1 of user core. Sep 13 00:26:40.043312 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:26:40.055627 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:26:40.060975 (systemd)[1764]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:26:40.190044 systemd[1764]: Queued start job for default target default.target. Sep 13 00:26:40.190608 systemd[1764]: Created slice app.slice - User Application Slice. Sep 13 00:26:40.190633 systemd[1764]: Reached target paths.target - Paths. Sep 13 00:26:40.190648 systemd[1764]: Reached target timers.target - Timers. Sep 13 00:26:40.198824 systemd[1764]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Sep 13 00:26:40.209567 systemd[1764]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:26:40.209645 systemd[1764]: Reached target sockets.target - Sockets. Sep 13 00:26:40.209658 systemd[1764]: Reached target basic.target - Basic System. Sep 13 00:26:40.209707 systemd[1764]: Reached target default.target - Main User Target. Sep 13 00:26:40.209735 systemd[1764]: Startup finished in 140ms. Sep 13 00:26:40.210289 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:26:40.218200 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:26:40.907949 systemd[1]: Started sshd@1-78.46.184.112:22-147.75.109.163:36806.service - OpenSSH per-connection server daemon (147.75.109.163:36806). Sep 13 00:26:41.609281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 00:26:41.623821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:41.756732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:41.766073 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:41.818638 kubelet[1790]: E0913 00:26:41.818553 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:41.824770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:41.825012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:41.891329 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 36806 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:41.894087 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:41.902428 systemd-logind[1566]: New session 2 of user core. Sep 13 00:26:41.914984 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:26:42.579107 sshd[1776]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:42.584138 systemd[1]: sshd@1-78.46.184.112:22-147.75.109.163:36806.service: Deactivated successfully. Sep 13 00:26:42.589830 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:26:42.592098 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:26:42.593446 systemd-logind[1566]: Removed session 2. Sep 13 00:26:42.741106 systemd[1]: Started sshd@2-78.46.184.112:22-147.75.109.163:36816.service - OpenSSH per-connection server daemon (147.75.109.163:36816). Sep 13 00:26:43.725235 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 36816 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:43.727438 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:43.733707 systemd-logind[1566]: New session 3 of user core. Sep 13 00:26:43.745355 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:26:44.406192 sshd[1804]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:44.411832 systemd[1]: sshd@2-78.46.184.112:22-147.75.109.163:36816.service: Deactivated successfully. Sep 13 00:26:44.416014 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. 
Sep 13 00:26:44.416999 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:26:44.418992 systemd-logind[1566]: Removed session 3. Sep 13 00:26:44.573050 systemd[1]: Started sshd@3-78.46.184.112:22-147.75.109.163:36818.service - OpenSSH per-connection server daemon (147.75.109.163:36818). Sep 13 00:26:45.563262 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 36818 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:45.565584 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:45.572725 systemd-logind[1566]: New session 4 of user core. Sep 13 00:26:45.578984 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:26:46.252058 sshd[1812]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:46.256346 systemd[1]: sshd@3-78.46.184.112:22-147.75.109.163:36818.service: Deactivated successfully. Sep 13 00:26:46.260068 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:26:46.262204 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:26:46.263533 systemd-logind[1566]: Removed session 4. Sep 13 00:26:46.426964 systemd[1]: Started sshd@4-78.46.184.112:22-147.75.109.163:36832.service - OpenSSH per-connection server daemon (147.75.109.163:36832). Sep 13 00:26:47.411638 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 36832 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:47.413440 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:47.419242 systemd-logind[1566]: New session 5 of user core. Sep 13 00:26:47.437022 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:26:47.952387 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:26:47.952708 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:26:47.968808 sudo[1824]: pam_unix(sudo:session): session closed for user root Sep 13 00:26:48.131250 sshd[1820]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:48.136966 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:26:48.138117 systemd[1]: sshd@4-78.46.184.112:22-147.75.109.163:36832.service: Deactivated successfully. Sep 13 00:26:48.141287 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:26:48.142664 systemd-logind[1566]: Removed session 5. Sep 13 00:26:48.295911 systemd[1]: Started sshd@5-78.46.184.112:22-147.75.109.163:36838.service - OpenSSH per-connection server daemon (147.75.109.163:36838). Sep 13 00:26:49.292606 sshd[1829]: Accepted publickey for core from 147.75.109.163 port 36838 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:49.295277 sshd[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:49.301781 systemd-logind[1566]: New session 6 of user core. Sep 13 00:26:49.308020 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:26:49.817363 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:26:49.818906 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:26:49.823605 sudo[1834]: pam_unix(sudo:session): session closed for user root Sep 13 00:26:49.830969 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:26:49.831255 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:26:49.852137 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:26:49.854139 auditctl[1837]: No rules Sep 13 00:26:49.855694 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:26:49.855996 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:26:49.859841 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:26:49.909898 augenrules[1856]: No rules Sep 13 00:26:49.912952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:26:49.915776 sudo[1833]: pam_unix(sudo:session): session closed for user root Sep 13 00:26:50.076340 sshd[1829]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:50.082297 systemd[1]: sshd@5-78.46.184.112:22-147.75.109.163:36838.service: Deactivated successfully. Sep 13 00:26:50.085869 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:26:50.086959 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:26:50.088747 systemd-logind[1566]: Removed session 6. Sep 13 00:26:50.241880 systemd[1]: Started sshd@6-78.46.184.112:22-147.75.109.163:55266.service - OpenSSH per-connection server daemon (147.75.109.163:55266). Sep 13 00:26:51.215328 sshd[1865]: Accepted publickey for core from 147.75.109.163 port 55266 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM Sep 13 00:26:51.217331 sshd[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:51.223059 systemd-logind[1566]: New session 7 of user core. Sep 13 00:26:51.230147 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:26:51.736092 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:26:51.736833 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:26:51.858950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 00:26:51.867059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:26:52.033832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:26:52.040830 (kubelet)[1896]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:26:52.070950 systemd[1]: Starting docker.service - Docker Application Container Engine... 
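Each sudo invocation above is logged as a single record: invoking user, working directory, target user, and the exact command, separated by " ; ". A rough parser for that default layout (the sample line is copied from this log):

```python
import re

line = ("core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/bin/systemctl restart audit-rules")

m = re.match(r"(?P<who>\S+) : PWD=(?P<pwd>\S+) ; "
             r"USER=(?P<as_user>\S+) ; COMMAND=(?P<cmd>.+)", line)
print(m.groupdict())
# {'who': 'core', 'pwd': '/home/core', 'as_user': 'root',
#  'cmd': '/usr/bin/systemctl restart audit-rules'}
```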
Sep 13 00:26:52.072609 (dockerd)[1901]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:26:52.097142 kubelet[1896]: E0913 00:26:52.097084 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:26:52.106042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:26:52.106727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:26:52.336527 dockerd[1901]: time="2025-09-13T00:26:52.335150533Z" level=info msg="Starting up" Sep 13 00:26:52.442048 dockerd[1901]: time="2025-09-13T00:26:52.442003801Z" level=info msg="Loading containers: start." Sep 13 00:26:52.553499 kernel: Initializing XFRM netlink socket Sep 13 00:26:52.630708 systemd-networkd[1244]: docker0: Link UP Sep 13 00:26:52.645807 dockerd[1901]: time="2025-09-13T00:26:52.645718646Z" level=info msg="Loading containers: done." Sep 13 00:26:52.668832 dockerd[1901]: time="2025-09-13T00:26:52.668756652Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:26:52.669132 dockerd[1901]: time="2025-09-13T00:26:52.668925482Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:26:52.669190 dockerd[1901]: time="2025-09-13T00:26:52.669156402Z" level=info msg="Daemon has completed initialization" Sep 13 00:26:52.714990 dockerd[1901]: time="2025-09-13T00:26:52.714819580Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:26:52.715446 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:26:53.622578 update_engine[1570]: I20250913 00:26:53.622103 1570 update_attempter.cc:509] Updating boot flags... Sep 13 00:26:53.662527 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1976) Sep 13 00:26:53.761631 containerd[1597]: time="2025-09-13T00:26:53.761479541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:26:54.456417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471072182.mount: Deactivated successfully. 
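The overlay2 warning during docker startup is the daemon detecting that this kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, in which case it avoids the native overlayfs diff path. When the overlay module is loaded, the effective setting is visible as a module parameter (the path below is the standard module-parameters location; treat this as a quick manual check, not dockerd's own probe):

```python
from pathlib import Path

# Overlayfs exposes its redirect_dir policy as a module parameter.
p = Path("/sys/module/overlay/parameters/redirect_dir")
print(p.read_text().strip() if p.exists()
      else "overlay module not loaded (or redirect_dir not built in)")
```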
Sep 13 00:26:56.319745 containerd[1597]: time="2025-09-13T00:26:56.319625039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:56.320994 containerd[1597]: time="2025-09-13T00:26:56.320950631Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687423"
Sep 13 00:26:56.322204 containerd[1597]: time="2025-09-13T00:26:56.321707421Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:56.326334 containerd[1597]: time="2025-09-13T00:26:56.326282765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:56.327501 containerd[1597]: time="2025-09-13T00:26:56.327441614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.565920106s"
Sep 13 00:26:56.327632 containerd[1597]: time="2025-09-13T00:26:56.327615599Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:26:56.329447 containerd[1597]: time="2025-09-13T00:26:56.329398298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:26:58.660489 containerd[1597]: time="2025-09-13T00:26:58.658682441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:58.661072 containerd[1597]: time="2025-09-13T00:26:58.660640459Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459787"
Sep 13 00:26:58.661935 containerd[1597]: time="2025-09-13T00:26:58.661828176Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:58.666975 containerd[1597]: time="2025-09-13T00:26:58.666900245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:26:58.670109 containerd[1597]: time="2025-09-13T00:26:58.669874357Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 2.34027711s"
Sep 13 00:26:58.670109 containerd[1597]: time="2025-09-13T00:26:58.669925724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:26:58.670828 containerd[1597]: time="2025-09-13T00:26:58.670800679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:27:00.276492 containerd[1597]: time="2025-09-13T00:27:00.274508769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:00.277094 containerd[1597]: time="2025-09-13T00:27:00.277000989Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127526"
Sep 13 00:27:00.277334 containerd[1597]: time="2025-09-13T00:27:00.277290744Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:00.281836 containerd[1597]: time="2025-09-13T00:27:00.281784364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:00.283233 containerd[1597]: time="2025-09-13T00:27:00.283182413Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.612341328s"
Sep 13 00:27:00.283382 containerd[1597]: time="2025-09-13T00:27:00.283361114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:27:00.284281 containerd[1597]: time="2025-09-13T00:27:00.284197095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:27:01.642961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531927722.mount: Deactivated successfully.
Sep 13 00:27:02.109155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 13 00:27:02.115719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:02.118155 containerd[1597]: time="2025-09-13T00:27:02.117323273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:02.122819 containerd[1597]: time="2025-09-13T00:27:02.122756152Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954933"
Sep 13 00:27:02.125626 containerd[1597]: time="2025-09-13T00:27:02.124317924Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:02.129510 containerd[1597]: time="2025-09-13T00:27:02.129267228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:02.130920 containerd[1597]: time="2025-09-13T00:27:02.130827960Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.846391837s"
Sep 13 00:27:02.130920 containerd[1597]: time="2025-09-13T00:27:02.130880406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:27:02.132278 containerd[1597]: time="2025-09-13T00:27:02.131985848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:27:02.274796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:02.275334 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:27:02.343040 kubelet[2143]: E0913 00:27:02.342986 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:27:02.345374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:27:02.345543 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:27:02.832056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929576373.mount: Deactivated successfully.
Sep 13 00:27:03.615327 containerd[1597]: time="2025-09-13T00:27:03.614098074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:03.616924 containerd[1597]: time="2025-09-13T00:27:03.616882328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Sep 13 00:27:03.618318 containerd[1597]: time="2025-09-13T00:27:03.618275395Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:03.622021 containerd[1597]: time="2025-09-13T00:27:03.621977505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:03.623868 containerd[1597]: time="2025-09-13T00:27:03.623794737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.491764325s"
Sep 13 00:27:03.623961 containerd[1597]: time="2025-09-13T00:27:03.623865504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:27:03.624568 containerd[1597]: time="2025-09-13T00:27:03.624516373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:27:04.251938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683775139.mount: Deactivated successfully.
Sep 13 00:27:04.260712 containerd[1597]: time="2025-09-13T00:27:04.260652140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:04.262164 containerd[1597]: time="2025-09-13T00:27:04.262118288Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Sep 13 00:27:04.263491 containerd[1597]: time="2025-09-13T00:27:04.262935771Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:04.265546 containerd[1597]: time="2025-09-13T00:27:04.265440624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:04.266515 containerd[1597]: time="2025-09-13T00:27:04.266302071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 641.714291ms"
Sep 13 00:27:04.266515 containerd[1597]: time="2025-09-13T00:27:04.266357117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:27:04.267242 containerd[1597]: time="2025-09-13T00:27:04.266979460Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:27:04.886212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775277242.mount: Deactivated successfully.
Sep 13 00:27:08.285760 containerd[1597]: time="2025-09-13T00:27:08.285671162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:08.287792 containerd[1597]: time="2025-09-13T00:27:08.287742501Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537235"
Sep 13 00:27:08.288503 containerd[1597]: time="2025-09-13T00:27:08.287971561Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:08.292978 containerd[1597]: time="2025-09-13T00:27:08.292312935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:08.293945 containerd[1597]: time="2025-09-13T00:27:08.293888791Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.026825844s"
Sep 13 00:27:08.293945 containerd[1597]: time="2025-09-13T00:27:08.293940956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:27:12.359221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Sep 13 00:27:12.368865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:12.523018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:12.524176 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:27:12.570467 kubelet[2293]: E0913 00:27:12.569668 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:27:12.573745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:27:12.573925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:27:13.529940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:13.548960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:13.586988 systemd[1]: Reloading requested from client PID 2309 ('systemctl') (unit session-7.scope)...
Sep 13 00:27:13.587175 systemd[1]: Reloading...
Sep 13 00:27:13.719490 zram_generator::config[2353]: No configuration found.
Sep 13 00:27:13.823829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:27:13.894500 systemd[1]: Reloading finished in 306 ms.
Sep 13 00:27:13.952715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:13.959093 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:13.962928 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:27:13.963235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:13.976401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:14.105768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:14.118950 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:27:14.163850 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:14.163850 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:27:14.163850 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:14.164282 kubelet[2412]: I0913 00:27:14.163904 2412 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:27:15.362506 kubelet[2412]: I0913 00:27:15.360940 2412 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:27:15.362506 kubelet[2412]: I0913 00:27:15.360995 2412 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:27:15.362506 kubelet[2412]: I0913 00:27:15.361537 2412 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:27:15.392643 kubelet[2412]: E0913 00:27:15.392576 2412 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://78.46.184.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:15.393396 kubelet[2412]: I0913 00:27:15.393367 2412 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:27:15.404205 kubelet[2412]: E0913 00:27:15.404127 2412 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:27:15.404205 kubelet[2412]: I0913 00:27:15.404204 2412 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:27:15.410162 kubelet[2412]: I0913 00:27:15.410111 2412 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:27:15.412386 kubelet[2412]: I0913 00:27:15.412325 2412 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:27:15.412642 kubelet[2412]: I0913 00:27:15.412584 2412 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:27:15.412822 kubelet[2412]: I0913 00:27:15.412628 2412 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c2bbffc425","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:27:15.412908 kubelet[2412]: I0913 00:27:15.412878 2412 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:27:15.412908 kubelet[2412]: I0913 00:27:15.412891 2412 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:27:15.413225 kubelet[2412]: I0913 00:27:15.413188 2412 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:15.418500 kubelet[2412]: I0913 00:27:15.416441 2412 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:27:15.418500 kubelet[2412]: I0913 00:27:15.416500 2412 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:27:15.418500 kubelet[2412]: I0913 00:27:15.416524 2412 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:27:15.418500 kubelet[2412]: I0913 00:27:15.416541 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:27:15.421240 kubelet[2412]: W0913 00:27:15.421066 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.184.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c2bbffc425&limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:15.421240 kubelet[2412]: E0913 00:27:15.421243 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.46.184.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c2bbffc425&limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:15.421781 kubelet[2412]: I0913 00:27:15.421719 2412 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:27:15.423396 kubelet[2412]: I0913 00:27:15.422531 2412 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:27:15.423396 kubelet[2412]: W0913 00:27:15.422750 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:27:15.423648 kubelet[2412]: W0913 00:27:15.423589 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.184.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:15.423682 kubelet[2412]: E0913 00:27:15.423660 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.46.184.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:15.427541 kubelet[2412]: I0913 00:27:15.426584 2412 server.go:1274] "Started kubelet"
Sep 13 00:27:15.428770 kubelet[2412]: I0913 00:27:15.427930 2412 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:27:15.430433 kubelet[2412]: I0913 00:27:15.430389 2412 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:27:15.434314 kubelet[2412]: I0913 00:27:15.433218 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:27:15.434314 kubelet[2412]: I0913 00:27:15.433604 2412 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:27:15.436065 kubelet[2412]: I0913 00:27:15.435985 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:27:15.436971 kubelet[2412]: E0913 00:27:15.435474 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.46.184.112:6443/api/v1/namespaces/default/events\": dial tcp 78.46.184.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-5-n-c2bbffc425.1864aff8450c8f3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-5-n-c2bbffc425,UID:ci-4081-3-5-n-c2bbffc425,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c2bbffc425,},FirstTimestamp:2025-09-13 00:27:15.426537276 +0000 UTC m=+1.303236012,LastTimestamp:2025-09-13 00:27:15.426537276 +0000 UTC m=+1.303236012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c2bbffc425,}"
Sep 13 00:27:15.437615 kubelet[2412]: I0913 00:27:15.437571 2412 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:27:15.441727 kubelet[2412]: I0913 00:27:15.439678 2412 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:27:15.441727 kubelet[2412]: E0913 00:27:15.440160 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:15.441727 kubelet[2412]: I0913 00:27:15.441013 2412 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:27:15.441727 kubelet[2412]: I0913 00:27:15.441071 2412 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:27:15.442125 kubelet[2412]: W0913 00:27:15.442070 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.184.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:15.442232 kubelet[2412]: E0913 00:27:15.442127 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.46.184.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:15.442303 kubelet[2412]: E0913 00:27:15.442248 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.184.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c2bbffc425?timeout=10s\": dial tcp 78.46.184.112:6443: connect: connection refused" interval="200ms"
Sep 13 00:27:15.444626 kubelet[2412]: I0913 00:27:15.444525 2412 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:27:15.444942 kubelet[2412]: I0913 00:27:15.444773 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:27:15.448857 kubelet[2412]: I0913 00:27:15.448810 2412 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:27:15.465173 kubelet[2412]: E0913 00:27:15.465112 2412 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:27:15.479049 kubelet[2412]: I0913 00:27:15.479015 2412 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:27:15.479049 kubelet[2412]: I0913 00:27:15.479040 2412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:27:15.479049 kubelet[2412]: I0913 00:27:15.479063 2412 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:15.480183 kubelet[2412]: I0913 00:27:15.479354 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:27:15.482058 kubelet[2412]: I0913 00:27:15.482020 2412 policy_none.go:49] "None policy: Start"
Sep 13 00:27:15.483574 kubelet[2412]: I0913 00:27:15.483498 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:27:15.483574 kubelet[2412]: I0913 00:27:15.483535 2412 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:27:15.483574 kubelet[2412]: I0913 00:27:15.483560 2412 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:27:15.483769 kubelet[2412]: E0913 00:27:15.483618 2412 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:27:15.486862 kubelet[2412]: I0913 00:27:15.486728 2412 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:27:15.486862 kubelet[2412]: I0913 00:27:15.486779 2412 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:27:15.487389 kubelet[2412]: W0913 00:27:15.487121 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.184.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:15.487389 kubelet[2412]: E0913 00:27:15.487252 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.46.184.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:15.493492 kubelet[2412]: I0913 00:27:15.492898 2412 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:27:15.493492 kubelet[2412]: I0913 00:27:15.493111 2412 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:27:15.493492 kubelet[2412]: I0913 00:27:15.493123 2412 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:27:15.495277 kubelet[2412]: I0913 00:27:15.495243 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:27:15.497383 kubelet[2412]: E0913 00:27:15.497352 2412 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:15.603427 kubelet[2412]: I0913 00:27:15.603394 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.604269 kubelet[2412]: E0913 00:27:15.604237 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.184.112:6443/api/v1/nodes\": dial tcp 78.46.184.112:6443: connect: connection refused" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.642672 kubelet[2412]: I0913 00:27:15.641951 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.642672 kubelet[2412]: I0913 00:27:15.642024 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77fad0694354b9b3d91ae7e8d9a86782-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c2bbffc425\" (UID: \"77fad0694354b9b3d91ae7e8d9a86782\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.642672 kubelet[2412]: I0913 00:27:15.642075 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.642672 kubelet[2412]: I0913 00:27:15.642115 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.642672 kubelet[2412]: I0913 00:27:15.642180 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.643230 kubelet[2412]: I0913 00:27:15.642229 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.643230 kubelet[2412]: I0913 00:27:15.642270 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.643230 kubelet[2412]: I0913 00:27:15.642309 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.643230 kubelet[2412]: I0913 00:27:15.642349 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.644478 kubelet[2412]: E0913 00:27:15.644361 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.184.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c2bbffc425?timeout=10s\": dial tcp 78.46.184.112:6443: connect: connection refused" interval="400ms"
Sep 13 00:27:15.806553 kubelet[2412]: I0913 00:27:15.806405 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.807070 kubelet[2412]: E0913 00:27:15.806990 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.184.112:6443/api/v1/nodes\": dial tcp 78.46.184.112:6443: connect: connection refused" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:15.893508 containerd[1597]: time="2025-09-13T00:27:15.893220369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c2bbffc425,Uid:9f08c5c26d31ec12b07d429f7ad9ffd8,Namespace:kube-system,Attempt:0,}"
Sep 13 00:27:15.896896 containerd[1597]: time="2025-09-13T00:27:15.896506392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c2bbffc425,Uid:7c67e065725c4fa3e5fd8b20a93ff1f7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:27:15.905883 containerd[1597]: time="2025-09-13T00:27:15.905827346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c2bbffc425,Uid:77fad0694354b9b3d91ae7e8d9a86782,Namespace:kube-system,Attempt:0,}"
Sep 13 00:27:16.045095 kubelet[2412]: E0913 00:27:16.045003 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.184.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c2bbffc425?timeout=10s\": dial tcp 78.46.184.112:6443: connect: connection refused" interval="800ms"
Sep 13 00:27:16.210646 kubelet[2412]: I0913 00:27:16.210376 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:16.210990 kubelet[2412]: E0913 00:27:16.210878 2412 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://78.46.184.112:6443/api/v1/nodes\": dial tcp 78.46.184.112:6443: connect: connection refused" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:16.368664 kubelet[2412]: W0913 00:27:16.368571 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.184.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:16.368664 kubelet[2412]: E0913 00:27:16.368673 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://78.46.184.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:16.417101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283855231.mount: Deactivated successfully.
Sep 13 00:27:16.421541 kubelet[2412]: W0913 00:27:16.421406 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.184.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:16.421686 kubelet[2412]: E0913 00:27:16.421570 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://78.46.184.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:16.424745 containerd[1597]: time="2025-09-13T00:27:16.424656458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:27:16.426898 containerd[1597]: time="2025-09-13T00:27:16.426855803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Sep 13 00:27:16.432154 containerd[1597]: time="2025-09-13T00:27:16.432090229Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:27:16.434985 containerd[1597]: time="2025-09-13T00:27:16.434920096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:27:16.437126 containerd[1597]: time="2025-09-13T00:27:16.437044196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 00:27:16.437248 containerd[1597]: time="2025-09-13T00:27:16.437178885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:27:16.439410 containerd[1597]: time="2025-09-13T00:27:16.439151935Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:27:16.441911 containerd[1597]: time="2025-09-13T00:27:16.441864514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 00:27:16.444401 containerd[1597]: time="2025-09-13T00:27:16.444360679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.431846ms"
Sep 13 00:27:16.449907 containerd[1597]: time="2025-09-13T00:27:16.449716512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.115993ms"
Sep 13 00:27:16.450409 containerd[1597]: time="2025-09-13T00:27:16.450381476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.002976ms"
Sep 13 00:27:16.572896 containerd[1597]: time="2025-09-13T00:27:16.572518296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:16.572896 containerd[1597]: time="2025-09-13T00:27:16.572596261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:16.572896 containerd[1597]: time="2025-09-13T00:27:16.572608542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.574850 containerd[1597]: time="2025-09-13T00:27:16.573803301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.578421 containerd[1597]: time="2025-09-13T00:27:16.578181830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:16.578421 containerd[1597]: time="2025-09-13T00:27:16.578317959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:16.578421 containerd[1597]: time="2025-09-13T00:27:16.578336520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.579955 containerd[1597]: time="2025-09-13T00:27:16.579667608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.579955 containerd[1597]: time="2025-09-13T00:27:16.579028326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:16.579955 containerd[1597]: time="2025-09-13T00:27:16.579080769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:16.579955 containerd[1597]: time="2025-09-13T00:27:16.579092130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.579955 containerd[1597]: time="2025-09-13T00:27:16.579170415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:16.667748 containerd[1597]: time="2025-09-13T00:27:16.666160716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-5-n-c2bbffc425,Uid:9f08c5c26d31ec12b07d429f7ad9ffd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ca728c19ea9c7276a5512d3c90bc133699b44f508365673d0b1d0c0268c8067\""
Sep 13 00:27:16.671623 containerd[1597]: time="2025-09-13T00:27:16.671583674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-5-n-c2bbffc425,Uid:77fad0694354b9b3d91ae7e8d9a86782,Namespace:kube-system,Attempt:0,} returns sandbox id \"206454aaa01d89eaeaeefde5318fe9ed8815eebc24278c218b01e467fbb9ea28\""
Sep 13 00:27:16.678114 containerd[1597]: time="2025-09-13T00:27:16.677889650Z" level=info msg="CreateContainer within sandbox \"206454aaa01d89eaeaeefde5318fe9ed8815eebc24278c218b01e467fbb9ea28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:27:16.678282 containerd[1597]: time="2025-09-13T00:27:16.677933813Z" level=info msg="CreateContainer within sandbox \"2ca728c19ea9c7276a5512d3c90bc133699b44f508365673d0b1d0c0268c8067\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:27:16.680166 containerd[1597]: time="2025-09-13T00:27:16.679908943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-5-n-c2bbffc425,Uid:7c67e065725c4fa3e5fd8b20a93ff1f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b93c511ada9e7530422ef1587c4b4e5244ef866d86241349872a9d174381388\""
Sep 13 00:27:16.683787 containerd[1597]: time="2025-09-13T00:27:16.683571505Z" level=info msg="CreateContainer within sandbox \"1b93c511ada9e7530422ef1587c4b4e5244ef866d86241349872a9d174381388\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:27:16.702611 containerd[1597]: time="2025-09-13T00:27:16.702065925Z" level=info msg="CreateContainer within sandbox \"206454aaa01d89eaeaeefde5318fe9ed8815eebc24278c218b01e467fbb9ea28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403\""
Sep 13 00:27:16.703199 containerd[1597]: time="2025-09-13T00:27:16.703097553Z" level=info msg="StartContainer for \"a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403\""
Sep 13 00:27:16.705259 containerd[1597]: time="2025-09-13T00:27:16.704998479Z" level=info msg="CreateContainer within sandbox \"2ca728c19ea9c7276a5512d3c90bc133699b44f508365673d0b1d0c0268c8067\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"535f624958b931ad82c973ad73eeb377360ca8829e530cef557af1e73482a3c4\""
Sep 13 00:27:16.708035 containerd[1597]: time="2025-09-13T00:27:16.707792823Z" level=info msg="StartContainer for \"535f624958b931ad82c973ad73eeb377360ca8829e530cef557af1e73482a3c4\""
Sep 13 00:27:16.709360 containerd[1597]: time="2025-09-13T00:27:16.707844587Z" level=info msg="CreateContainer within sandbox \"1b93c511ada9e7530422ef1587c4b4e5244ef866d86241349872a9d174381388\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de\""
Sep 13 00:27:16.712484 containerd[1597]: time="2025-09-13T00:27:16.710762619Z" level=info msg="StartContainer for \"17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de\""
Sep 13 00:27:16.816659 containerd[1597]: time="2025-09-13T00:27:16.816614245Z" level=info msg="StartContainer for \"a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403\" returns successfully"
Sep 13 00:27:16.829043 kubelet[2412]: W0913 00:27:16.828896 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.184.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:16.829043 kubelet[2412]: E0913 00:27:16.828976 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://78.46.184.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:16.835662 containerd[1597]: time="2025-09-13T00:27:16.835608138Z" level=info msg="StartContainer for \"17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de\" returns successfully"
Sep 13 00:27:16.844998 containerd[1597]: time="2025-09-13T00:27:16.844952395Z" level=info msg="StartContainer for \"535f624958b931ad82c973ad73eeb377360ca8829e530cef557af1e73482a3c4\" returns successfully"
Sep 13 00:27:16.846894 kubelet[2412]: E0913 00:27:16.846856 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.184.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-5-n-c2bbffc425?timeout=10s\": dial tcp 78.46.184.112:6443: connect: connection refused" interval="1.6s"
Sep 13 00:27:16.849744 kubelet[2412]: W0913 00:27:16.849531 2412 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.184.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c2bbffc425&limit=500&resourceVersion=0": dial tcp 78.46.184.112:6443: connect: connection refused
Sep 13 00:27:16.850270 kubelet[2412]: E0913 00:27:16.850143 2412 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://78.46.184.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-5-n-c2bbffc425&limit=500&resourceVersion=0\": dial tcp 78.46.184.112:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:27:17.015688 kubelet[2412]: I0913 00:27:17.015659 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:19.116905 kubelet[2412]: E0913 00:27:19.116842 2412 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-5-n-c2bbffc425\" not found" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:19.148084 kubelet[2412]: I0913 00:27:19.148039 2412 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:19.148084 kubelet[2412]: E0913 00:27:19.148086 2412 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-5-n-c2bbffc425\": node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.180939 kubelet[2412]: E0913 00:27:19.180878 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.281361 kubelet[2412]: E0913 00:27:19.281290 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.382635 kubelet[2412]: E0913 00:27:19.382503 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.483063 kubelet[2412]: E0913 00:27:19.482992 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.583264 kubelet[2412]: E0913 00:27:19.583175 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.684048 kubelet[2412]: E0913 00:27:19.683857 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:19.784575 kubelet[2412]: E0913 00:27:19.784504 2412 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:20.420341 kubelet[2412]: I0913 00:27:20.420187 2412 apiserver.go:52] "Watching apiserver"
Sep 13 00:27:20.442208 kubelet[2412]: I0913 00:27:20.442167 2412 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:27:21.078166 systemd[1]: Reloading requested from client PID 2685 ('systemctl') (unit session-7.scope)...
Sep 13 00:27:21.078188 systemd[1]: Reloading...
Sep 13 00:27:21.203524 zram_generator::config[2721]: No configuration found.
Sep 13 00:27:21.330343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:27:21.412790 systemd[1]: Reloading finished in 334 ms.
Sep 13 00:27:21.454631 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:21.469326 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:27:21.470105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:21.481908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:27:21.637791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:27:21.641268 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:27:21.701552 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:21.702604 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:27:21.702604 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:27:21.702604 kubelet[2780]: I0913 00:27:21.701787 2780 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:27:21.714836 kubelet[2780]: I0913 00:27:21.714792 2780 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:27:21.715616 kubelet[2780]: I0913 00:27:21.714918 2780 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:27:21.715616 kubelet[2780]: I0913 00:27:21.715176 2780 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:27:21.718096 kubelet[2780]: I0913 00:27:21.716786 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:27:21.719834 kubelet[2780]: I0913 00:27:21.719634 2780 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:27:21.726846 kubelet[2780]: E0913 00:27:21.726751 2780 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:27:21.727018 kubelet[2780]: I0913 00:27:21.727004 2780 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:27:21.731653 kubelet[2780]: I0913 00:27:21.731611 2780 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:27:21.733733 kubelet[2780]: I0913 00:27:21.732161 2780 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:27:21.733733 kubelet[2780]: I0913 00:27:21.732266 2780 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:27:21.733733 kubelet[2780]: I0913 00:27:21.732293 2780 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-5-n-c2bbffc425","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:27:21.733733 kubelet[2780]: I0913 00:27:21.732512 2780 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732525 2780 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732564 2780 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732664 2780 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732677 2780 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732699 2780 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:27:21.734001 kubelet[2780]: I0913 00:27:21.732712 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:27:21.735934 kubelet[2780]: I0913 00:27:21.735889 2780 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:27:21.743119 kubelet[2780]: I0913 00:27:21.736452 2780 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:27:21.744074 kubelet[2780]: I0913 00:27:21.744054 2780 server.go:1274] "Started kubelet"
Sep 13 00:27:21.751476 kubelet[2780]: I0913 00:27:21.749170 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:27:21.751476 kubelet[2780]: I0913 00:27:21.749610 2780 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:27:21.751476 kubelet[2780]: I0913 00:27:21.750737 2780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:27:21.752238 kubelet[2780]: I0913 00:27:21.752199 2780 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:27:21.754748 kubelet[2780]: I0913 00:27:21.754722 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:27:21.764801 kubelet[2780]: I0913 00:27:21.761925 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:27:21.769281 kubelet[2780]: I0913 00:27:21.767016 2780 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:27:21.777325 kubelet[2780]: I0913 00:27:21.767573 2780 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:27:21.788415 kubelet[2780]: E0913 00:27:21.767774 2780 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-5-n-c2bbffc425\" not found"
Sep 13 00:27:21.790363 kubelet[2780]: I0913 00:27:21.777618 2780 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:27:21.790522 kubelet[2780]: I0913 00:27:21.777905 2780 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:27:21.790712 kubelet[2780]: I0913 00:27:21.790686 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:27:21.794205 kubelet[2780]: E0913 00:27:21.794167 2780 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:27:21.796556 kubelet[2780]: I0913 00:27:21.795394 2780 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:27:21.806139 kubelet[2780]: I0913 00:27:21.806087 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:27:21.808955 kubelet[2780]: I0913 00:27:21.808923 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:27:21.809176 kubelet[2780]: I0913 00:27:21.809164 2780 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:27:21.809249 kubelet[2780]: I0913 00:27:21.809241 2780 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:27:21.809360 kubelet[2780]: E0913 00:27:21.809335 2780 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:27:21.871878 kubelet[2780]: I0913 00:27:21.871841 2780 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:27:21.871878 kubelet[2780]: I0913 00:27:21.871865 2780 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:27:21.871878 kubelet[2780]: I0913 00:27:21.871886 2780 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:27:21.872960 kubelet[2780]: I0913 00:27:21.872059 2780 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:27:21.872960 kubelet[2780]: I0913 00:27:21.872072 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:27:21.872960 kubelet[2780]: I0913 00:27:21.872093 2780 policy_none.go:49] "None policy: Start"
Sep 13 00:27:21.873258 kubelet[2780]: I0913 00:27:21.873157 2780 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:27:21.873258 kubelet[2780]: I0913 00:27:21.873182 2780 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:27:21.873432 kubelet[2780]: I0913 00:27:21.873345 2780 state_mem.go:75] "Updated machine memory state"
Sep 13 00:27:21.875653 kubelet[2780]: I0913 00:27:21.874867 2780 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:27:21.876145 kubelet[2780]: I0913 00:27:21.875793 2780 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:27:21.876145 kubelet[2780]: I0913 00:27:21.875816 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:27:21.877186 kubelet[2780]: I0913 00:27:21.876569 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:27:21.984601 kubelet[2780]: I0913 00:27:21.981919 2780 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:21.992252 kubelet[2780]: I0913 00:27:21.991966 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-ca-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:21.992778 kubelet[2780]: I0913 00:27:21.992756 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425"
Sep 13 00:27:21.993711 kubelet[2780]: I0913 00:27:21.992949 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\")
pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.993874 kubelet[2780]: I0913 00:27:21.993851 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.993970 kubelet[2780]: I0913 00:27:21.993955 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f08c5c26d31ec12b07d429f7ad9ffd8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-5-n-c2bbffc425\" (UID: \"9f08c5c26d31ec12b07d429f7ad9ffd8\") " pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.994049 kubelet[2780]: I0913 00:27:21.994033 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.994119 kubelet[2780]: I0913 00:27:21.994109 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.994198 kubelet[2780]: I0913 00:27:21.994186 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c67e065725c4fa3e5fd8b20a93ff1f7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-5-n-c2bbffc425\" (UID: \"7c67e065725c4fa3e5fd8b20a93ff1f7\") " pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.995064 kubelet[2780]: I0913 00:27:21.994271 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/77fad0694354b9b3d91ae7e8d9a86782-kubeconfig\") pod \"kube-scheduler-ci-4081-3-5-n-c2bbffc425\" (UID: \"77fad0694354b9b3d91ae7e8d9a86782\") " pod="kube-system/kube-scheduler-ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.995961 kubelet[2780]: I0913 00:27:21.995935 2780 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:21.996090 kubelet[2780]: I0913 00:27:21.996048 2780 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-5-n-c2bbffc425" Sep 13 00:27:22.734433 kubelet[2780]: I0913 00:27:22.734372 2780 apiserver.go:52] "Watching apiserver" Sep 13 00:27:22.788142 kubelet[2780]: I0913 00:27:22.788047 2780 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:27:22.869122 kubelet[2780]: I0913 00:27:22.868738 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-5-n-c2bbffc425" podStartSLOduration=1.868719748 podStartE2EDuration="1.868719748s" podCreationTimestamp="2025-09-13 
00:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:22.868695106 +0000 UTC m=+1.219440670" watchObservedRunningTime="2025-09-13 00:27:22.868719748 +0000 UTC m=+1.219465312" Sep 13 00:27:22.882915 kubelet[2780]: I0913 00:27:22.882841 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-5-n-c2bbffc425" podStartSLOduration=1.882808702 podStartE2EDuration="1.882808702s" podCreationTimestamp="2025-09-13 00:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:22.880740865 +0000 UTC m=+1.231486429" watchObservedRunningTime="2025-09-13 00:27:22.882808702 +0000 UTC m=+1.233554266" Sep 13 00:27:22.895489 kubelet[2780]: I0913 00:27:22.895309 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-5-n-c2bbffc425" podStartSLOduration=1.895291765 podStartE2EDuration="1.895291765s" podCreationTimestamp="2025-09-13 00:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:22.895115915 +0000 UTC m=+1.245861479" watchObservedRunningTime="2025-09-13 00:27:22.895291765 +0000 UTC m=+1.246037289" Sep 13 00:27:23.946856 systemd[1]: Started sshd@7-78.46.184.112:22-119.1.156.50:17970.service - OpenSSH per-connection server daemon (119.1.156.50:17970). Sep 13 00:27:25.103423 sshd[2821]: Connection closed by authenticating user root 119.1.156.50 port 17970 [preauth] Sep 13 00:27:25.106894 systemd[1]: sshd@7-78.46.184.112:22-119.1.156.50:17970.service: Deactivated successfully. Sep 13 00:27:25.333013 systemd[1]: Started sshd@8-78.46.184.112:22-119.1.156.50:18845.service - OpenSSH per-connection server daemon (119.1.156.50:18845). Sep 13 00:27:26.453911 sshd[2826]: Connection closed by authenticating user root 119.1.156.50 port 18845 [preauth] Sep 13 00:27:26.455858 systemd[1]: sshd@8-78.46.184.112:22-119.1.156.50:18845.service: Deactivated successfully. Sep 13 00:27:26.689282 systemd[1]: Started sshd@9-78.46.184.112:22-119.1.156.50:19822.service - OpenSSH per-connection server daemon (119.1.156.50:19822). Sep 13 00:27:26.828047 kubelet[2780]: I0913 00:27:26.827499 2780 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:27:26.828400 containerd[1597]: time="2025-09-13T00:27:26.827906382Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:27:26.828980 kubelet[2780]: I0913 00:27:26.828933 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:27:27.429966 kubelet[2780]: I0913 00:27:27.429898 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fce1bb2-6402-460e-81ca-2fa549736f9f-xtables-lock\") pod \"kube-proxy-ckvnq\" (UID: \"5fce1bb2-6402-460e-81ca-2fa549736f9f\") " pod="kube-system/kube-proxy-ckvnq"
Sep 13 00:27:27.429966 kubelet[2780]: I0913 00:27:27.429957 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdq9j\" (UniqueName: \"kubernetes.io/projected/5fce1bb2-6402-460e-81ca-2fa549736f9f-kube-api-access-gdq9j\") pod \"kube-proxy-ckvnq\" (UID: \"5fce1bb2-6402-460e-81ca-2fa549736f9f\") " pod="kube-system/kube-proxy-ckvnq"
Sep 13 00:27:27.430158 kubelet[2780]: I0913 00:27:27.429983 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fce1bb2-6402-460e-81ca-2fa549736f9f-kube-proxy\") pod \"kube-proxy-ckvnq\" (UID: \"5fce1bb2-6402-460e-81ca-2fa549736f9f\") " pod="kube-system/kube-proxy-ckvnq"
Sep 13 00:27:27.430158 kubelet[2780]: I0913 00:27:27.430001 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fce1bb2-6402-460e-81ca-2fa549736f9f-lib-modules\") pod \"kube-proxy-ckvnq\" (UID: \"5fce1bb2-6402-460e-81ca-2fa549736f9f\") " pod="kube-system/kube-proxy-ckvnq"
Sep 13 00:27:27.540898 kubelet[2780]: E0913 00:27:27.540854 2780 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:27:27.542139 kubelet[2780]: E0913 00:27:27.541039 2780 projected.go:194] Error preparing data for projected volume kube-api-access-gdq9j for pod kube-system/kube-proxy-ckvnq: configmap "kube-root-ca.crt" not found
Sep 13 00:27:27.542139 kubelet[2780]: E0913 00:27:27.541120 2780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5fce1bb2-6402-460e-81ca-2fa549736f9f-kube-api-access-gdq9j podName:5fce1bb2-6402-460e-81ca-2fa549736f9f nodeName:}" failed. No retries permitted until 2025-09-13 00:27:28.041093006 +0000 UTC m=+6.391838570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gdq9j" (UniqueName: "kubernetes.io/projected/5fce1bb2-6402-460e-81ca-2fa549736f9f-kube-api-access-gdq9j") pod "kube-proxy-ckvnq" (UID: "5fce1bb2-6402-460e-81ca-2fa549736f9f") : configmap "kube-root-ca.crt" not found
Sep 13 00:27:27.822338 sshd[2831]: Connection closed by authenticating user root 119.1.156.50 port 19822 [preauth]
Sep 13 00:27:27.825344 systemd[1]: sshd@9-78.46.184.112:22-119.1.156.50:19822.service: Deactivated successfully.
Sep 13 00:27:28.034787 kubelet[2780]: I0913 00:27:28.034711 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcxtj\" (UniqueName: \"kubernetes.io/projected/e3f8bd90-06aa-4f28-8581-2837e94ee4ee-kube-api-access-wcxtj\") pod \"tigera-operator-58fc44c59b-tgphr\" (UID: \"e3f8bd90-06aa-4f28-8581-2837e94ee4ee\") " pod="tigera-operator/tigera-operator-58fc44c59b-tgphr"
Sep 13 00:27:28.034787 kubelet[2780]: I0913 00:27:28.034803 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3f8bd90-06aa-4f28-8581-2837e94ee4ee-var-lib-calico\") pod \"tigera-operator-58fc44c59b-tgphr\" (UID: \"e3f8bd90-06aa-4f28-8581-2837e94ee4ee\") " pod="tigera-operator/tigera-operator-58fc44c59b-tgphr"
Sep 13 00:27:28.057068 systemd[1]: Started sshd@10-78.46.184.112:22-119.1.156.50:20858.service - OpenSSH per-connection server daemon (119.1.156.50:20858).
Sep 13 00:27:28.279218 containerd[1597]: time="2025-09-13T00:27:28.279158468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tgphr,Uid:e3f8bd90-06aa-4f28-8581-2837e94ee4ee,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:27:28.314354 containerd[1597]: time="2025-09-13T00:27:28.313730550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:28.314354 containerd[1597]: time="2025-09-13T00:27:28.313895758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:28.314354 containerd[1597]: time="2025-09-13T00:27:28.313910559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:28.314354 containerd[1597]: time="2025-09-13T00:27:28.314043765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:28.315291 containerd[1597]: time="2025-09-13T00:27:28.314855086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ckvnq,Uid:5fce1bb2-6402-460e-81ca-2fa549736f9f,Namespace:kube-system,Attempt:0,}"
Sep 13 00:27:28.341732 systemd[1]: Started sshd@11-78.46.184.112:22-193.46.255.244:47988.service - OpenSSH per-connection server daemon (193.46.255.244:47988).
Sep 13 00:27:28.349743 containerd[1597]: time="2025-09-13T00:27:28.349608536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:28.349743 containerd[1597]: time="2025-09-13T00:27:28.349678100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:28.349743 containerd[1597]: time="2025-09-13T00:27:28.349690340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:28.350301 containerd[1597]: time="2025-09-13T00:27:28.350150883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:28.407530 containerd[1597]: time="2025-09-13T00:27:28.407358692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-tgphr,Uid:e3f8bd90-06aa-4f28-8581-2837e94ee4ee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"877964b18c196be587c77b2006b9df9076348237012eb5419bef47d0b3cbfbfb\""
Sep 13 00:27:28.413252 containerd[1597]: time="2025-09-13T00:27:28.412979012Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:27:28.421314 containerd[1597]: time="2025-09-13T00:27:28.421271025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ckvnq,Uid:5fce1bb2-6402-460e-81ca-2fa549736f9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e96a52d3169bb73c08913c6f6a9b475fea3859c2bfa6a50b0e462605913eecc\""
Sep 13 00:27:28.426655 containerd[1597]: time="2025-09-13T00:27:28.426331877Z" level=info msg="CreateContainer within sandbox \"2e96a52d3169bb73c08913c6f6a9b475fea3859c2bfa6a50b0e462605913eecc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:27:28.440479 containerd[1597]: time="2025-09-13T00:27:28.440408898Z" level=info msg="CreateContainer within sandbox \"2e96a52d3169bb73c08913c6f6a9b475fea3859c2bfa6a50b0e462605913eecc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"580094f43c62df904c64a22343b457df528f37c302c1fe262d9a91f9779b6e99\""
Sep 13 00:27:28.441293 containerd[1597]: time="2025-09-13T00:27:28.441249740Z" level=info msg="StartContainer for \"580094f43c62df904c64a22343b457df528f37c302c1fe262d9a91f9779b6e99\""
Sep 13 00:27:28.516050 containerd[1597]: time="2025-09-13T00:27:28.515880976Z" level=info msg="StartContainer for \"580094f43c62df904c64a22343b457df528f37c302c1fe262d9a91f9779b6e99\" returns successfully"
Sep 13 00:27:28.575101 sshd[2874]: Received disconnect from 193.46.255.244 port 47988:11: [preauth]
Sep 13 00:27:28.575101 sshd[2874]: Disconnected from 193.46.255.244 port 47988 [preauth]
Sep 13 00:27:28.573512 systemd[1]: sshd@11-78.46.184.112:22-193.46.255.244:47988.service: Deactivated successfully.
Sep 13 00:27:29.201033 sshd[2837]: Connection closed by authenticating user root 119.1.156.50 port 20858 [preauth]
Sep 13 00:27:29.204735 systemd[1]: sshd@10-78.46.184.112:22-119.1.156.50:20858.service: Deactivated successfully.
Sep 13 00:27:29.433963 systemd[1]: Started sshd@12-78.46.184.112:22-119.1.156.50:22608.service - OpenSSH per-connection server daemon (119.1.156.50:22608).
Sep 13 00:27:30.345672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285438368.mount: Deactivated successfully.
Sep 13 00:27:30.552830 sshd[3093]: Connection closed by authenticating user root 119.1.156.50 port 22608 [preauth]
Sep 13 00:27:30.557354 systemd[1]: sshd@12-78.46.184.112:22-119.1.156.50:22608.service: Deactivated successfully.
Sep 13 00:27:30.766728 containerd[1597]: time="2025-09-13T00:27:30.766616670Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:30.767339 containerd[1597]: time="2025-09-13T00:27:30.766882483Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 13 00:27:30.769843 containerd[1597]: time="2025-09-13T00:27:30.767605758Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:30.774600 containerd[1597]: time="2025-09-13T00:27:30.773885420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:27:30.775290 containerd[1597]: time="2025-09-13T00:27:30.775228365Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.36219975s"
Sep 13 00:27:30.775290 containerd[1597]: time="2025-09-13T00:27:30.775286527Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 13 00:27:30.777739 systemd[1]: Started sshd@13-78.46.184.112:22-119.1.156.50:23330.service - OpenSSH per-connection server daemon (119.1.156.50:23330).
Sep 13 00:27:30.783421 containerd[1597]: time="2025-09-13T00:27:30.783171027Z" level=info msg="CreateContainer within sandbox \"877964b18c196be587c77b2006b9df9076348237012eb5419bef47d0b3cbfbfb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:27:30.802486 containerd[1597]: time="2025-09-13T00:27:30.802171261Z" level=info msg="CreateContainer within sandbox \"877964b18c196be587c77b2006b9df9076348237012eb5419bef47d0b3cbfbfb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9\""
Sep 13 00:27:30.806211 containerd[1597]: time="2025-09-13T00:27:30.805043359Z" level=info msg="StartContainer for \"99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9\""
Sep 13 00:27:30.884842 containerd[1597]: time="2025-09-13T00:27:30.884776356Z" level=info msg="StartContainer for \"99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9\" returns successfully"
Sep 13 00:27:31.894116 sshd[3106]: Connection closed by authenticating user root 119.1.156.50 port 23330 [preauth]
Sep 13 00:27:31.897896 systemd[1]: sshd@13-78.46.184.112:22-119.1.156.50:23330.service: Deactivated successfully.
Sep 13 00:27:31.901452 kubelet[2780]: I0913 00:27:31.898656 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ckvnq" podStartSLOduration=4.898628822 podStartE2EDuration="4.898628822s" podCreationTimestamp="2025-09-13 00:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:27:28.881613789 +0000 UTC m=+7.232359313" watchObservedRunningTime="2025-09-13 00:27:31.898628822 +0000 UTC m=+10.249374386"
Sep 13 00:27:32.133083 systemd[1]: Started sshd@14-78.46.184.112:22-119.1.156.50:24318.service - OpenSSH per-connection server daemon (119.1.156.50:24318).
Sep 13 00:27:33.284605 sshd[3147]: Connection closed by authenticating user root 119.1.156.50 port 24318 [preauth]
Sep 13 00:27:33.295130 systemd[1]: sshd@14-78.46.184.112:22-119.1.156.50:24318.service: Deactivated successfully.
Sep 13 00:27:33.309258 kubelet[2780]: I0913 00:27:33.309087 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-tgphr" podStartSLOduration=3.942324191 podStartE2EDuration="6.309067723s" podCreationTimestamp="2025-09-13 00:27:27 +0000 UTC" firstStartedPulling="2025-09-13 00:27:28.411039155 +0000 UTC m=+6.761784719" lastFinishedPulling="2025-09-13 00:27:30.777782687 +0000 UTC m=+9.128528251" observedRunningTime="2025-09-13 00:27:31.899144006 +0000 UTC m=+10.249889610" watchObservedRunningTime="2025-09-13 00:27:33.309067723 +0000 UTC m=+11.659813287"
Sep 13 00:27:33.512449 systemd[1]: Started sshd@15-78.46.184.112:22-119.1.156.50:25537.service - OpenSSH per-connection server daemon (119.1.156.50:25537).
Sep 13 00:27:34.633263 sshd[3152]: Connection closed by authenticating user root 119.1.156.50 port 25537 [preauth]
Sep 13 00:27:34.638027 systemd[1]: sshd@15-78.46.184.112:22-119.1.156.50:25537.service: Deactivated successfully.
Sep 13 00:27:34.881576 systemd[1]: Started sshd@16-78.46.184.112:22-119.1.156.50:26849.service - OpenSSH per-connection server daemon (119.1.156.50:26849).
Sep 13 00:27:36.066391 sshd[3183]: Connection closed by authenticating user root 119.1.156.50 port 26849 [preauth]
Sep 13 00:27:36.073569 systemd[1]: sshd@16-78.46.184.112:22-119.1.156.50:26849.service: Deactivated successfully.
Sep 13 00:27:36.301175 systemd[1]: Started sshd@17-78.46.184.112:22-119.1.156.50:27717.service - OpenSSH per-connection server daemon (119.1.156.50:27717).
Sep 13 00:27:37.480496 sshd[3188]: Connection closed by authenticating user root 119.1.156.50 port 27717 [preauth]
Sep 13 00:27:37.485234 systemd[1]: sshd@17-78.46.184.112:22-119.1.156.50:27717.service: Deactivated successfully.
Sep 13 00:27:37.579339 sudo[1869]: pam_unix(sudo:session): session closed for user root
Sep 13 00:27:37.702288 systemd[1]: Started sshd@18-78.46.184.112:22-119.1.156.50:29145.service - OpenSSH per-connection server daemon (119.1.156.50:29145).
Sep 13 00:27:37.739211 sshd[1865]: pam_unix(sshd:session): session closed for user core
Sep 13 00:27:37.745671 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:27:37.747286 systemd[1]: sshd@6-78.46.184.112:22-147.75.109.163:55266.service: Deactivated successfully.
Sep 13 00:27:37.756107 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:27:37.759714 systemd-logind[1566]: Removed session 7.
Sep 13 00:27:38.828980 sshd[3209]: Connection closed by authenticating user root 119.1.156.50 port 29145 [preauth]
Sep 13 00:27:38.834356 systemd[1]: sshd@18-78.46.184.112:22-119.1.156.50:29145.service: Deactivated successfully.
Sep 13 00:27:39.069818 systemd[1]: Started sshd@19-78.46.184.112:22-119.1.156.50:30065.service - OpenSSH per-connection server daemon (119.1.156.50:30065).
Sep 13 00:27:40.236739 sshd[3218]: Connection closed by authenticating user root 119.1.156.50 port 30065 [preauth]
Sep 13 00:27:40.249187 systemd[1]: sshd@19-78.46.184.112:22-119.1.156.50:30065.service: Deactivated successfully.
Sep 13 00:27:40.454864 systemd[1]: Started sshd@20-78.46.184.112:22-119.1.156.50:30905.service - OpenSSH per-connection server daemon (119.1.156.50:30905).
Sep 13 00:27:41.596881 sshd[3227]: Connection closed by authenticating user root 119.1.156.50 port 30905 [preauth]
Sep 13 00:27:41.602209 systemd[1]: sshd@20-78.46.184.112:22-119.1.156.50:30905.service: Deactivated successfully.
Sep 13 00:27:41.837974 systemd[1]: Started sshd@21-78.46.184.112:22-119.1.156.50:32310.service - OpenSSH per-connection server daemon (119.1.156.50:32310).
Sep 13 00:27:42.982371 sshd[3232]: Connection closed by authenticating user root 119.1.156.50 port 32310 [preauth]
Sep 13 00:27:42.989978 systemd[1]: sshd@21-78.46.184.112:22-119.1.156.50:32310.service: Deactivated successfully.
Sep 13 00:27:43.204815 systemd[1]: Started sshd@22-78.46.184.112:22-119.1.156.50:33753.service - OpenSSH per-connection server daemon (119.1.156.50:33753).
Sep 13 00:27:44.047443 kubelet[2780]: I0913 00:27:44.043196 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d982c41-f51b-4f0f-a031-e1fa36ccaaf9-tigera-ca-bundle\") pod \"calico-typha-785fd67d54-k5kbh\" (UID: \"4d982c41-f51b-4f0f-a031-e1fa36ccaaf9\") " pod="calico-system/calico-typha-785fd67d54-k5kbh"
Sep 13 00:27:44.047443 kubelet[2780]: I0913 00:27:44.043245 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4d982c41-f51b-4f0f-a031-e1fa36ccaaf9-typha-certs\") pod \"calico-typha-785fd67d54-k5kbh\" (UID: \"4d982c41-f51b-4f0f-a031-e1fa36ccaaf9\") " pod="calico-system/calico-typha-785fd67d54-k5kbh"
Sep 13 00:27:44.047443 kubelet[2780]: I0913 00:27:44.043267 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hrfq\" (UniqueName: \"kubernetes.io/projected/4d982c41-f51b-4f0f-a031-e1fa36ccaaf9-kube-api-access-6hrfq\") pod \"calico-typha-785fd67d54-k5kbh\" (UID: \"4d982c41-f51b-4f0f-a031-e1fa36ccaaf9\") " pod="calico-system/calico-typha-785fd67d54-k5kbh"
Sep 13 00:27:44.144156 kubelet[2780]: I0913 00:27:44.143872 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-node-certs\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.145738 kubelet[2780]: I0913 00:27:44.145644 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59xjm\" (UniqueName: \"kubernetes.io/projected/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-kube-api-access-59xjm\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.145944 kubelet[2780]: I0913 00:27:44.145929 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-xtables-lock\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.146139 kubelet[2780]: I0913 00:27:44.146078 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-var-run-calico\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.147715 kubelet[2780]: I0913 00:27:44.146626 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-flexvol-driver-host\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.147970 kubelet[2780]: I0913 00:27:44.147951 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-tigera-ca-bundle\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.151041 kubelet[2780]: I0913 00:27:44.150919 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-cni-net-dir\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.151398 kubelet[2780]: I0913 00:27:44.151382 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-lib-modules\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.151718 kubelet[2780]: I0913 00:27:44.151697 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-cni-log-dir\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.151960 kubelet[2780]: I0913 00:27:44.151904 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-policysync\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.152160 kubelet[2780]: I0913 00:27:44.152145 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-cni-bin-dir\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.152553 kubelet[2780]: I0913 00:27:44.152320 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b44ff2aa-4019-4e78-a0ac-40ddeca4890b-var-lib-calico\") pod \"calico-node-spqct\" (UID: \"b44ff2aa-4019-4e78-a0ac-40ddeca4890b\") " pod="calico-system/calico-node-spqct"
Sep 13 00:27:44.183797 kubelet[2780]: E0913 00:27:44.183736 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3"
Sep 13 00:27:44.253121 kubelet[2780]: I0913 00:27:44.253063 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58a81445-bd68-46b0-ad45-3005b2bad0b3-socket-dir\") pod \"csi-node-driver-zcpvc\" (UID: \"58a81445-bd68-46b0-ad45-3005b2bad0b3\") " pod="calico-system/csi-node-driver-zcpvc"
Sep 13 00:27:44.253121 kubelet[2780]: I0913 00:27:44.253117 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzttd\" (UniqueName: \"kubernetes.io/projected/58a81445-bd68-46b0-ad45-3005b2bad0b3-kube-api-access-dzttd\") pod \"csi-node-driver-zcpvc\" (UID: \"58a81445-bd68-46b0-ad45-3005b2bad0b3\") " pod="calico-system/csi-node-driver-zcpvc"
Sep 13 00:27:44.253297 kubelet[2780]: I0913 00:27:44.253151 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58a81445-bd68-46b0-ad45-3005b2bad0b3-kubelet-dir\") pod \"csi-node-driver-zcpvc\" (UID: \"58a81445-bd68-46b0-ad45-3005b2bad0b3\") " pod="calico-system/csi-node-driver-zcpvc"
Sep 13 00:27:44.253297 kubelet[2780]: I0913 00:27:44.253203 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58a81445-bd68-46b0-ad45-3005b2bad0b3-registration-dir\") pod \"csi-node-driver-zcpvc\" (UID: \"58a81445-bd68-46b0-ad45-3005b2bad0b3\") " pod="calico-system/csi-node-driver-zcpvc"
Sep 13 00:27:44.253297 kubelet[2780]: I0913 00:27:44.253244 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58a81445-bd68-46b0-ad45-3005b2bad0b3-varrun\") pod \"csi-node-driver-zcpvc\" (UID: \"58a81445-bd68-46b0-ad45-3005b2bad0b3\") " pod="calico-system/csi-node-driver-zcpvc"
Sep 13 00:27:44.259477 kubelet[2780]: E0913 00:27:44.258598 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.259477 kubelet[2780]: W0913 00:27:44.258626 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.259477 kubelet[2780]: E0913 00:27:44.258672 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.263256 kubelet[2780]: E0913 00:27:44.262831 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.263256 kubelet[2780]: W0913 00:27:44.262855 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.263256 kubelet[2780]: E0913 00:27:44.262880 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.268513 kubelet[2780]: E0913 00:27:44.268444 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.268737 kubelet[2780]: W0913 00:27:44.268666 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.268737 kubelet[2780]: E0913 00:27:44.268696 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.272548 kubelet[2780]: E0913 00:27:44.270527 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.272548 kubelet[2780]: W0913 00:27:44.271862 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.272548 kubelet[2780]: E0913 00:27:44.272285 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.272938 kubelet[2780]: E0913 00:27:44.272904 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.273562 kubelet[2780]: W0913 00:27:44.273537 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.274449 kubelet[2780]: E0913 00:27:44.274424 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.276591 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.277611 kubelet[2780]: W0913 00:27:44.276614 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.276638 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.276882 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.277611 kubelet[2780]: W0913 00:27:44.276890 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.276899 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.277032 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.277611 kubelet[2780]: W0913 00:27:44.277039 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.277049 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.277611 kubelet[2780]: E0913 00:27:44.277232 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.277920 kubelet[2780]: W0913 00:27:44.277240 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.277920 kubelet[2780]: E0913 00:27:44.277252 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.303738 kubelet[2780]: E0913 00:27:44.303574 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.303738 kubelet[2780]: W0913 00:27:44.303597 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.303738 kubelet[2780]: E0913 00:27:44.303626 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.317562 sshd[3237]: Connection closed by authenticating user root 119.1.156.50 port 33753 [preauth]
Sep 13 00:27:44.325763 systemd[1]: sshd@22-78.46.184.112:22-119.1.156.50:33753.service: Deactivated successfully.
Sep 13 00:27:44.354046 kubelet[2780]: E0913 00:27:44.354012 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.354229 containerd[1597]: time="2025-09-13T00:27:44.354181652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-spqct,Uid:b44ff2aa-4019-4e78-a0ac-40ddeca4890b,Namespace:calico-system,Attempt:0,}"
Sep 13 00:27:44.354624 kubelet[2780]: W0913 00:27:44.354203 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.354911 kubelet[2780]: E0913 00:27:44.354686 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.355621 kubelet[2780]: E0913 00:27:44.355500 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.355621 kubelet[2780]: W0913 00:27:44.355518 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.355621 kubelet[2780]: E0913 00:27:44.355569 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.356104 kubelet[2780]: E0913 00:27:44.356022 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.356104 kubelet[2780]: W0913 00:27:44.356086 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.357229 kubelet[2780]: E0913 00:27:44.357105 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.357818 kubelet[2780]: E0913 00:27:44.357720 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.357818 kubelet[2780]: W0913 00:27:44.357735 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.358042 kubelet[2780]: E0913 00:27:44.357923 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.358251 kubelet[2780]: E0913 00:27:44.358239 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.358328 kubelet[2780]: W0913 00:27:44.358317 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.358538 kubelet[2780]: E0913 00:27:44.358510 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.358993 kubelet[2780]: E0913 00:27:44.358922 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.358993 kubelet[2780]: W0913 00:27:44.358936 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.359185 kubelet[2780]: E0913 00:27:44.359079 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.359987 kubelet[2780]: E0913 00:27:44.359883 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.359987 kubelet[2780]: W0913 00:27:44.359898 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.359987 kubelet[2780]: E0913 00:27:44.359936 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.361080 kubelet[2780]: E0913 00:27:44.360641 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.361080 kubelet[2780]: W0913 00:27:44.360653 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.361080 kubelet[2780]: E0913 00:27:44.360688 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.361080 kubelet[2780]: E0913 00:27:44.360966 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.361080 kubelet[2780]: W0913 00:27:44.360976 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.361364 kubelet[2780]: E0913 00:27:44.361319 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.362070 kubelet[2780]: E0913 00:27:44.361933 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.362070 kubelet[2780]: W0913 00:27:44.361944 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.362420 kubelet[2780]: E0913 00:27:44.362239 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.362576 kubelet[2780]: E0913 00:27:44.362563 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.362638 kubelet[2780]: W0913 00:27:44.362627 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.362954 kubelet[2780]: E0913 00:27:44.362768 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.363589 kubelet[2780]: E0913 00:27:44.363434 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.363589 kubelet[2780]: W0913 00:27:44.363449 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.363931 kubelet[2780]: E0913 00:27:44.363716 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.365441 kubelet[2780]: E0913 00:27:44.364209 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.365657 kubelet[2780]: W0913 00:27:44.365564 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.365899 kubelet[2780]: E0913 00:27:44.365749 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.366144 kubelet[2780]: E0913 00:27:44.366129 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.366210 kubelet[2780]: W0913 00:27:44.366199 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.366324 kubelet[2780]: E0913 00:27:44.366301 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.366788 kubelet[2780]: E0913 00:27:44.366685 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.366788 kubelet[2780]: W0913 00:27:44.366700 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.366946 kubelet[2780]: E0913 00:27:44.366850 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.367161 kubelet[2780]: E0913 00:27:44.367067 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.367161 kubelet[2780]: W0913 00:27:44.367079 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.367520 kubelet[2780]: E0913 00:27:44.367298 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.374650 kubelet[2780]: E0913 00:27:44.374611 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.374650 kubelet[2780]: W0913 00:27:44.374642 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.375729 kubelet[2780]: E0913 00:27:44.374816 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.378703 kubelet[2780]: E0913 00:27:44.378240 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.378703 kubelet[2780]: W0913 00:27:44.378262 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.380087 kubelet[2780]: E0913 00:27:44.379934 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.382671 kubelet[2780]: E0913 00:27:44.381650 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.382671 kubelet[2780]: W0913 00:27:44.381675 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.382671 kubelet[2780]: E0913 00:27:44.381776 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.382671 kubelet[2780]: E0913 00:27:44.382145 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.382671 kubelet[2780]: W0913 00:27:44.382160 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.382671 kubelet[2780]: E0913 00:27:44.382366 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.382671 kubelet[2780]: E0913 00:27:44.382641 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.382671 kubelet[2780]: W0913 00:27:44.382651 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.383177 kubelet[2780]: E0913 00:27:44.383101 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.385356 kubelet[2780]: E0913 00:27:44.385314 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.385678 kubelet[2780]: W0913 00:27:44.385482 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.386108 kubelet[2780]: E0913 00:27:44.385794 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.386480 kubelet[2780]: E0913 00:27:44.386375 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.386480 kubelet[2780]: W0913 00:27:44.386394 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.387065 kubelet[2780]: E0913 00:27:44.386919 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.388819 kubelet[2780]: E0913 00:27:44.388692 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.388819 kubelet[2780]: W0913 00:27:44.388713 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.389868 kubelet[2780]: E0913 00:27:44.389400 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.390656 kubelet[2780]: E0913 00:27:44.390637 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.390889 kubelet[2780]: W0913 00:27:44.390806 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.390889 kubelet[2780]: E0913 00:27:44.390832 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.402308 containerd[1597]: time="2025-09-13T00:27:44.401984575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:44.402308 containerd[1597]: time="2025-09-13T00:27:44.402150380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:44.402308 containerd[1597]: time="2025-09-13T00:27:44.402163181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:44.402308 containerd[1597]: time="2025-09-13T00:27:44.403690710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:27:44.410405 kubelet[2780]: E0913 00:27:44.410373 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:27:44.411133 kubelet[2780]: W0913 00:27:44.411107 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:27:44.411218 kubelet[2780]: E0913 00:27:44.411206 2780 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:27:44.468519 containerd[1597]: time="2025-09-13T00:27:44.468440476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785fd67d54-k5kbh,Uid:4d982c41-f51b-4f0f-a031-e1fa36ccaaf9,Namespace:calico-system,Attempt:0,}"
Sep 13 00:27:44.470641 containerd[1597]: time="2025-09-13T00:27:44.470598385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-spqct,Uid:b44ff2aa-4019-4e78-a0ac-40ddeca4890b,Namespace:calico-system,Attempt:0,} returns sandbox id \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\""
Sep 13 00:27:44.474156 containerd[1597]: time="2025-09-13T00:27:44.474078017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:27:44.531508 containerd[1597]: time="2025-09-13T00:27:44.531241218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:27:44.531508 containerd[1597]: time="2025-09-13T00:27:44.531301780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:27:44.533629 containerd[1597]: time="2025-09-13T00:27:44.532219930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:44.534773 containerd[1597]: time="2025-09-13T00:27:44.534394400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:27:44.557761 systemd[1]: Started sshd@23-78.46.184.112:22-119.1.156.50:34863.service - OpenSSH per-connection server daemon (119.1.156.50:34863). Sep 13 00:27:44.661556 containerd[1597]: time="2025-09-13T00:27:44.661403091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-785fd67d54-k5kbh,Uid:4d982c41-f51b-4f0f-a031-e1fa36ccaaf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"0046a57f08719da9cc19060b7d3c8c3d1d88b68f22c7b17712edd31599a1d244\"" Sep 13 00:27:45.691605 sshd[3353]: Connection closed by authenticating user root 119.1.156.50 port 34863 [preauth] Sep 13 00:27:45.694879 systemd[1]: sshd@23-78.46.184.112:22-119.1.156.50:34863.service: Deactivated successfully. Sep 13 00:27:45.813447 kubelet[2780]: E0913 00:27:45.813377 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:45.918923 systemd[1]: Started sshd@24-78.46.184.112:22-119.1.156.50:35734.service - OpenSSH per-connection server daemon (119.1.156.50:35734). Sep 13 00:27:45.994450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248511744.mount: Deactivated successfully. Sep 13 00:27:46.150690 containerd[1597]: time="2025-09-13T00:27:46.150375531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:46.153308 containerd[1597]: time="2025-09-13T00:27:46.153198459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 13 00:27:46.155410 containerd[1597]: time="2025-09-13T00:27:46.155317694Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:46.159172 containerd[1597]: time="2025-09-13T00:27:46.159082544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:46.160764 containerd[1597]: time="2025-09-13T00:27:46.160580964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.686256499s" Sep 13 00:27:46.160764 containerd[1597]: time="2025-09-13T00:27:46.160640002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:27:46.163187 containerd[1597]: time="2025-09-13T00:27:46.163115223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:27:46.164779 
containerd[1597]: time="2025-09-13T00:27:46.164699919Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:27:46.187620 containerd[1597]: time="2025-09-13T00:27:46.187576766Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485\"" Sep 13 00:27:46.190478 containerd[1597]: time="2025-09-13T00:27:46.190405733Z" level=info msg="StartContainer for \"7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485\"" Sep 13 00:27:46.266383 containerd[1597]: time="2025-09-13T00:27:46.266186467Z" level=info msg="StartContainer for \"7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485\" returns successfully" Sep 13 00:27:46.323355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485-rootfs.mount: Deactivated successfully. Sep 13 00:27:46.383423 containerd[1597]: time="2025-09-13T00:27:46.383331110Z" level=info msg="shim disconnected" id=7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485 namespace=k8s.io Sep 13 00:27:46.383423 containerd[1597]: time="2025-09-13T00:27:46.383397868Z" level=warning msg="cleaning up after shim disconnected" id=7cac51b75c3fdfbf1d4ac282addeefec416c7f4ac364819f84d41903be21a485 namespace=k8s.io Sep 13 00:27:46.383423 containerd[1597]: time="2025-09-13T00:27:46.383409587Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:27:47.048699 sshd[3380]: Connection closed by authenticating user root 119.1.156.50 port 35734 [preauth] Sep 13 00:27:47.053192 systemd[1]: sshd@24-78.46.184.112:22-119.1.156.50:35734.service: Deactivated successfully. Sep 13 00:27:47.272879 systemd[1]: Started sshd@25-78.46.184.112:22-119.1.156.50:36962.service - OpenSSH per-connection server daemon (119.1.156.50:36962). Sep 13 00:27:47.810801 kubelet[2780]: E0913 00:27:47.810535 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:48.385788 sshd[3459]: Connection closed by authenticating user root 119.1.156.50 port 36962 [preauth] Sep 13 00:27:48.391776 systemd[1]: sshd@25-78.46.184.112:22-119.1.156.50:36962.service: Deactivated successfully. Sep 13 00:27:48.614944 systemd[1]: Started sshd@26-78.46.184.112:22-119.1.156.50:37984.service - OpenSSH per-connection server daemon (119.1.156.50:37984). 
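Note: the repeating kubelet triplet above (driver-call.go:262 / driver-call.go:149 / plugins.go:691) all comes from one condition: the FlexVolume prober shells out to /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary does not exist, so the captured driver output is empty, and decoding zero bytes of JSON yields exactly "unexpected end of JSON input". A minimal Go sketch reproduces the error text; the driverStatus struct here is illustrative, not kubelet's real type from driver-call.go:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative stand-in for the status object a FlexVolume driver is
    // expected to print as JSON on stdout.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        var st driverStatus
        // The uds executable was never found, so the driver "output" is empty.
        // Unmarshalling zero bytes reproduces the logged error verbatim.
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

Kubelet then emits the W line (executable file not found in $PATH) and the E line (plugin probe skipped) and re-probes on the next cycle, which is why the same triplet recurs every few milliseconds.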
Sep 13 00:27:48.653627 containerd[1597]: time="2025-09-13T00:27:48.651905363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:48.654589 containerd[1597]: time="2025-09-13T00:27:48.654534669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=31736396" Sep 13 00:27:48.655100 containerd[1597]: time="2025-09-13T00:27:48.655038251Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:48.659777 containerd[1597]: time="2025-09-13T00:27:48.659724083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:48.660973 containerd[1597]: time="2025-09-13T00:27:48.660927040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.4977545s" Sep 13 00:27:48.661146 containerd[1597]: time="2025-09-13T00:27:48.661123193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 13 00:27:48.664627 containerd[1597]: time="2025-09-13T00:27:48.664576310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:27:48.686754 containerd[1597]: time="2025-09-13T00:27:48.686701798Z" level=info msg="CreateContainer within sandbox \"0046a57f08719da9cc19060b7d3c8c3d1d88b68f22c7b17712edd31599a1d244\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:27:48.709556 containerd[1597]: time="2025-09-13T00:27:48.709283190Z" level=info msg="CreateContainer within sandbox \"0046a57f08719da9cc19060b7d3c8c3d1d88b68f22c7b17712edd31599a1d244\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f5db4579b6e609a84c94a7e2f90412af0eedb5c4abad0cca02b5e9c74dc2ff6\"" Sep 13 00:27:48.710448 containerd[1597]: time="2025-09-13T00:27:48.710402510Z" level=info msg="StartContainer for \"4f5db4579b6e609a84c94a7e2f90412af0eedb5c4abad0cca02b5e9c74dc2ff6\"" Sep 13 00:27:48.802705 containerd[1597]: time="2025-09-13T00:27:48.802205704Z" level=info msg="StartContainer for \"4f5db4579b6e609a84c94a7e2f90412af0eedb5c4abad0cca02b5e9c74dc2ff6\" returns successfully" Sep 13 00:27:49.743885 sshd[3468]: Connection closed by authenticating user root 119.1.156.50 port 37984 [preauth] Sep 13 00:27:49.747112 systemd[1]: sshd@26-78.46.184.112:22-119.1.156.50:37984.service: Deactivated successfully. 
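Note: containerd reports the typha pull completed "in 2.4977545s", which lines up with the gap between the PullImage entry at 00:27:46.163115223Z and the Pulled entry at 00:27:48.660927040Z. A small sketch, assuming those two timestamps (copied from the entries above) bound the pull:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the PullImage / "Pulled image" entries above.
        start, err := time.Parse(time.RFC3339Nano, "2025-09-13T00:27:46.163115223Z")
        if err != nil {
            panic(err)
        }
        end, err := time.Parse(time.RFC3339Nano, "2025-09-13T00:27:48.660927040Z")
        if err != nil {
            panic(err)
        }
        // Prints 2.497811817s; the logged 2.4977545s is measured inside
        // containerd just before the "Pulled image" entry is written, hence
        // the sub-millisecond difference.
        fmt.Println(end.Sub(start))
    }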
Sep 13 00:27:49.811906 kubelet[2780]: E0913 00:27:49.810207 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:49.952878 kubelet[2780]: I0913 00:27:49.952322 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:27:49.979737 systemd[1]: Started sshd@27-78.46.184.112:22-119.1.156.50:39292.service - OpenSSH per-connection server daemon (119.1.156.50:39292). Sep 13 00:27:51.127899 sshd[3515]: Connection closed by authenticating user root 119.1.156.50 port 39292 [preauth] Sep 13 00:27:51.136326 systemd[1]: sshd@27-78.46.184.112:22-119.1.156.50:39292.service: Deactivated successfully. Sep 13 00:27:51.344942 systemd[1]: Started sshd@28-78.46.184.112:22-119.1.156.50:40103.service - OpenSSH per-connection server daemon (119.1.156.50:40103). Sep 13 00:27:51.814102 kubelet[2780]: E0913 00:27:51.813913 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:52.446724 sshd[3520]: Connection closed by authenticating user root 119.1.156.50 port 40103 [preauth] Sep 13 00:27:52.449911 systemd[1]: sshd@28-78.46.184.112:22-119.1.156.50:40103.service: Deactivated successfully. Sep 13 00:27:52.591752 containerd[1597]: time="2025-09-13T00:27:52.591029437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:52.591752 containerd[1597]: time="2025-09-13T00:27:52.591546702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 00:27:52.592362 containerd[1597]: time="2025-09-13T00:27:52.592137846Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:52.598150 containerd[1597]: time="2025-09-13T00:27:52.598085637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:27:52.599006 containerd[1597]: time="2025-09-13T00:27:52.598948253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.934231068s" Sep 13 00:27:52.599006 containerd[1597]: time="2025-09-13T00:27:52.599000692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:27:52.604792 containerd[1597]: time="2025-09-13T00:27:52.604714570Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:27:52.627160 containerd[1597]: time="2025-09-13T00:27:52.626951581Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771\"" Sep 13 00:27:52.628814 containerd[1597]: time="2025-09-13T00:27:52.628743411Z" level=info msg="StartContainer for \"eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771\"" Sep 13 00:27:52.667248 systemd[1]: run-containerd-runc-k8s.io-eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771-runc.LQ8Lqn.mount: Deactivated successfully. Sep 13 00:27:52.688796 systemd[1]: Started sshd@29-78.46.184.112:22-119.1.156.50:41034.service - OpenSSH per-connection server daemon (119.1.156.50:41034). Sep 13 00:27:52.712300 containerd[1597]: time="2025-09-13T00:27:52.712245610Z" level=info msg="StartContainer for \"eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771\" returns successfully" Sep 13 00:27:52.994947 kubelet[2780]: I0913 00:27:52.993036 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-785fd67d54-k5kbh" podStartSLOduration=5.995695692 podStartE2EDuration="9.993015553s" podCreationTimestamp="2025-09-13 00:27:43 +0000 UTC" firstStartedPulling="2025-09-13 00:27:44.665032168 +0000 UTC m=+23.015777732" lastFinishedPulling="2025-09-13 00:27:48.662352029 +0000 UTC m=+27.013097593" observedRunningTime="2025-09-13 00:27:48.986452869 +0000 UTC m=+27.337198433" watchObservedRunningTime="2025-09-13 00:27:52.993015553 +0000 UTC m=+31.343761117" Sep 13 00:27:53.341677 containerd[1597]: time="2025-09-13T00:27:53.341552091Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:27:53.400431 kubelet[2780]: I0913 00:27:53.400389 2780 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:27:53.454906 containerd[1597]: time="2025-09-13T00:27:53.454384897Z" level=info msg="shim disconnected" id=eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771 namespace=k8s.io Sep 13 00:27:53.454906 containerd[1597]: time="2025-09-13T00:27:53.454469414Z" level=warning msg="cleaning up after shim disconnected" id=eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771 namespace=k8s.io Sep 13 00:27:53.454906 containerd[1597]: time="2025-09-13T00:27:53.454746247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:27:53.455323 kubelet[2780]: W0913 00:27:53.454675 2780 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-5-n-c2bbffc425" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-5-n-c2bbffc425' and this object Sep 13 00:27:53.455323 kubelet[2780]: E0913 00:27:53.454717 2780 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-5-n-c2bbffc425\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 
'ci-4081-3-5-n-c2bbffc425' and this object" logger="UnhandledError" Sep 13 00:27:53.541348 kubelet[2780]: I0913 00:27:53.541281 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tppvf\" (UniqueName: \"kubernetes.io/projected/39b89f9a-86c8-4e11-a841-da097dc790e6-kube-api-access-tppvf\") pod \"goldmane-7988f88666-gt72g\" (UID: \"39b89f9a-86c8-4e11-a841-da097dc790e6\") " pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:53.541348 kubelet[2780]: I0913 00:27:53.541330 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4fr\" (UniqueName: \"kubernetes.io/projected/fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8-kube-api-access-xv4fr\") pod \"coredns-7c65d6cfc9-5d7t9\" (UID: \"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8\") " pod="kube-system/coredns-7c65d6cfc9-5d7t9" Sep 13 00:27:53.541348 kubelet[2780]: I0913 00:27:53.541352 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687nn\" (UniqueName: \"kubernetes.io/projected/abb1d385-d781-4636-ae7d-8e9e046fdbb2-kube-api-access-687nn\") pod \"calico-apiserver-84657f8b5b-tzhb7\" (UID: \"abb1d385-d781-4636-ae7d-8e9e046fdbb2\") " pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" Sep 13 00:27:53.541897 kubelet[2780]: I0913 00:27:53.541369 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abb1d385-d781-4636-ae7d-8e9e046fdbb2-calico-apiserver-certs\") pod \"calico-apiserver-84657f8b5b-tzhb7\" (UID: \"abb1d385-d781-4636-ae7d-8e9e046fdbb2\") " pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" Sep 13 00:27:53.541897 kubelet[2780]: I0913 00:27:53.541384 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b89f9a-86c8-4e11-a841-da097dc790e6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-gt72g\" (UID: \"39b89f9a-86c8-4e11-a841-da097dc790e6\") " pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:53.541897 kubelet[2780]: I0913 00:27:53.541402 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-ca-bundle\") pod \"whisker-784cfb86b9-clzw2\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " pod="calico-system/whisker-784cfb86b9-clzw2" Sep 13 00:27:53.541897 kubelet[2780]: I0913 00:27:53.541418 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4856d853-be72-4277-97ae-7eb5ab571384-tigera-ca-bundle\") pod \"calico-kube-controllers-686d78856d-wr74j\" (UID: \"4856d853-be72-4277-97ae-7eb5ab571384\") " pod="calico-system/calico-kube-controllers-686d78856d-wr74j" Sep 13 00:27:53.541897 kubelet[2780]: I0913 00:27:53.541438 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qm6\" (UniqueName: \"kubernetes.io/projected/4856d853-be72-4277-97ae-7eb5ab571384-kube-api-access-x6qm6\") pod \"calico-kube-controllers-686d78856d-wr74j\" (UID: \"4856d853-be72-4277-97ae-7eb5ab571384\") " pod="calico-system/calico-kube-controllers-686d78856d-wr74j" Sep 13 00:27:53.542176 kubelet[2780]: I0913 00:27:53.541466 2780 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07a817c1-95b0-4d38-86e3-6bb11e5dacbe-config-volume\") pod \"coredns-7c65d6cfc9-45289\" (UID: \"07a817c1-95b0-4d38-86e3-6bb11e5dacbe\") " pod="kube-system/coredns-7c65d6cfc9-45289" Sep 13 00:27:53.542176 kubelet[2780]: I0913 00:27:53.541489 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f40c21ca-4f97-49b3-b97b-d3d765668104-calico-apiserver-certs\") pod \"calico-apiserver-84657f8b5b-zj9qr\" (UID: \"f40c21ca-4f97-49b3-b97b-d3d765668104\") " pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" Sep 13 00:27:53.542176 kubelet[2780]: I0913 00:27:53.541505 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqll\" (UniqueName: \"kubernetes.io/projected/07a817c1-95b0-4d38-86e3-6bb11e5dacbe-kube-api-access-6rqll\") pod \"coredns-7c65d6cfc9-45289\" (UID: \"07a817c1-95b0-4d38-86e3-6bb11e5dacbe\") " pod="kube-system/coredns-7c65d6cfc9-45289" Sep 13 00:27:53.542176 kubelet[2780]: I0913 00:27:53.541521 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np9xc\" (UniqueName: \"kubernetes.io/projected/f2c5940f-c316-4c69-88fc-dec3ef9a910d-kube-api-access-np9xc\") pod \"whisker-784cfb86b9-clzw2\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " pod="calico-system/whisker-784cfb86b9-clzw2" Sep 13 00:27:53.542176 kubelet[2780]: I0913 00:27:53.541536 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8-config-volume\") pod \"coredns-7c65d6cfc9-5d7t9\" (UID: \"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8\") " pod="kube-system/coredns-7c65d6cfc9-5d7t9" Sep 13 00:27:53.542343 kubelet[2780]: I0913 00:27:53.541552 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b59m\" (UniqueName: \"kubernetes.io/projected/f40c21ca-4f97-49b3-b97b-d3d765668104-kube-api-access-2b59m\") pod \"calico-apiserver-84657f8b5b-zj9qr\" (UID: \"f40c21ca-4f97-49b3-b97b-d3d765668104\") " pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" Sep 13 00:27:53.542343 kubelet[2780]: I0913 00:27:53.541567 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-backend-key-pair\") pod \"whisker-784cfb86b9-clzw2\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " pod="calico-system/whisker-784cfb86b9-clzw2" Sep 13 00:27:53.542343 kubelet[2780]: I0913 00:27:53.541610 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b89f9a-86c8-4e11-a841-da097dc790e6-config\") pod \"goldmane-7988f88666-gt72g\" (UID: \"39b89f9a-86c8-4e11-a841-da097dc790e6\") " pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:53.542343 kubelet[2780]: I0913 00:27:53.541629 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/39b89f9a-86c8-4e11-a841-da097dc790e6-goldmane-key-pair\") pod \"goldmane-7988f88666-gt72g\" (UID: 
\"39b89f9a-86c8-4e11-a841-da097dc790e6\") " pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:53.621970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eca9160c51489ae0672bd434e1914649233dbfb6d95c83d2baf0560641308771-rootfs.mount: Deactivated successfully. Sep 13 00:27:53.809151 containerd[1597]: time="2025-09-13T00:27:53.808377024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686d78856d-wr74j,Uid:4856d853-be72-4277-97ae-7eb5ab571384,Namespace:calico-system,Attempt:0,}" Sep 13 00:27:53.809151 containerd[1597]: time="2025-09-13T00:27:53.808753614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gt72g,Uid:39b89f9a-86c8-4e11-a841-da097dc790e6,Namespace:calico-system,Attempt:0,}" Sep 13 00:27:53.810021 containerd[1597]: time="2025-09-13T00:27:53.809381517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-zj9qr,Uid:f40c21ca-4f97-49b3-b97b-d3d765668104,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:27:53.815080 containerd[1597]: time="2025-09-13T00:27:53.814107232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-tzhb7,Uid:abb1d385-d781-4636-ae7d-8e9e046fdbb2,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:27:53.815725 containerd[1597]: time="2025-09-13T00:27:53.815690110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784cfb86b9-clzw2,Uid:f2c5940f-c316-4c69-88fc-dec3ef9a910d,Namespace:calico-system,Attempt:0,}" Sep 13 00:27:53.826273 containerd[1597]: time="2025-09-13T00:27:53.825630486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcpvc,Uid:58a81445-bd68-46b0-ad45-3005b2bad0b3,Namespace:calico-system,Attempt:0,}" Sep 13 00:27:53.863593 sshd[3557]: Connection closed by authenticating user root 119.1.156.50 port 41034 [preauth] Sep 13 00:27:53.868832 systemd[1]: sshd@29-78.46.184.112:22-119.1.156.50:41034.service: Deactivated successfully. 
Sep 13 00:27:53.992519 containerd[1597]: time="2025-09-13T00:27:53.992327903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:27:54.048948 containerd[1597]: time="2025-09-13T00:27:54.048889082Z" level=error msg="Failed to destroy network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.050439 containerd[1597]: time="2025-09-13T00:27:54.050364686Z" level=error msg="encountered an error cleaning up failed sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.050746 containerd[1597]: time="2025-09-13T00:27:54.050532521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686d78856d-wr74j,Uid:4856d853-be72-4277-97ae-7eb5ab571384,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.052630 kubelet[2780]: E0913 00:27:54.051998 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.052630 kubelet[2780]: E0913 00:27:54.052086 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686d78856d-wr74j" Sep 13 00:27:54.052630 kubelet[2780]: E0913 00:27:54.052107 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686d78856d-wr74j" Sep 13 00:27:54.053204 kubelet[2780]: E0913 00:27:54.052160 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-686d78856d-wr74j_calico-system(4856d853-be72-4277-97ae-7eb5ab571384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-686d78856d-wr74j_calico-system(4856d853-be72-4277-97ae-7eb5ab571384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-686d78856d-wr74j" podUID="4856d853-be72-4277-97ae-7eb5ab571384" Sep 13 00:27:54.092140 systemd[1]: Started sshd@30-78.46.184.112:22-119.1.156.50:42080.service - OpenSSH per-connection server daemon (119.1.156.50:42080). Sep 13 00:27:54.112435 containerd[1597]: time="2025-09-13T00:27:54.112382384Z" level=error msg="Failed to destroy network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.113942 containerd[1597]: time="2025-09-13T00:27:54.113877627Z" level=error msg="Failed to destroy network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.117052 containerd[1597]: time="2025-09-13T00:27:54.116947431Z" level=error msg="encountered an error cleaning up failed sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.117154 containerd[1597]: time="2025-09-13T00:27:54.117070948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-zj9qr,Uid:f40c21ca-4f97-49b3-b97b-d3d765668104,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.117379 kubelet[2780]: E0913 00:27:54.117335 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.117532 kubelet[2780]: E0913 00:27:54.117400 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" Sep 13 00:27:54.117532 kubelet[2780]: E0913 00:27:54.117423 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" Sep 13 00:27:54.117532 kubelet[2780]: E0913 00:27:54.117499 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84657f8b5b-zj9qr_calico-apiserver(f40c21ca-4f97-49b3-b97b-d3d765668104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84657f8b5b-zj9qr_calico-apiserver(f40c21ca-4f97-49b3-b97b-d3d765668104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" podUID="f40c21ca-4f97-49b3-b97b-d3d765668104" Sep 13 00:27:54.118694 containerd[1597]: time="2025-09-13T00:27:54.118647629Z" level=error msg="encountered an error cleaning up failed sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.119767 containerd[1597]: time="2025-09-13T00:27:54.118819984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcpvc,Uid:58a81445-bd68-46b0-ad45-3005b2bad0b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.119852 kubelet[2780]: E0913 00:27:54.119063 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.119852 kubelet[2780]: E0913 00:27:54.119118 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zcpvc" Sep 13 00:27:54.119852 kubelet[2780]: E0913 00:27:54.119151 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zcpvc" Sep 13 00:27:54.119940 kubelet[2780]: E0913 00:27:54.119188 2780 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zcpvc_calico-system(58a81445-bd68-46b0-ad45-3005b2bad0b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zcpvc_calico-system(58a81445-bd68-46b0-ad45-3005b2bad0b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:54.141073 containerd[1597]: time="2025-09-13T00:27:54.140991033Z" level=error msg="Failed to destroy network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.143537 containerd[1597]: time="2025-09-13T00:27:54.143127220Z" level=error msg="encountered an error cleaning up failed sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.143676 containerd[1597]: time="2025-09-13T00:27:54.143470332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784cfb86b9-clzw2,Uid:f2c5940f-c316-4c69-88fc-dec3ef9a910d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.144427 kubelet[2780]: E0913 00:27:54.143882 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.144427 kubelet[2780]: E0913 00:27:54.143940 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-784cfb86b9-clzw2" Sep 13 00:27:54.144427 kubelet[2780]: E0913 00:27:54.143969 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-784cfb86b9-clzw2" Sep 13 00:27:54.146243 kubelet[2780]: E0913 
00:27:54.144023 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-784cfb86b9-clzw2_calico-system(f2c5940f-c316-4c69-88fc-dec3ef9a910d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-784cfb86b9-clzw2_calico-system(f2c5940f-c316-4c69-88fc-dec3ef9a910d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-784cfb86b9-clzw2" podUID="f2c5940f-c316-4c69-88fc-dec3ef9a910d" Sep 13 00:27:54.147818 containerd[1597]: time="2025-09-13T00:27:54.147668547Z" level=error msg="Failed to destroy network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.148989 containerd[1597]: time="2025-09-13T00:27:54.148819479Z" level=error msg="encountered an error cleaning up failed sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.149170 containerd[1597]: time="2025-09-13T00:27:54.149134871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-tzhb7,Uid:abb1d385-d781-4636-ae7d-8e9e046fdbb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.150750 kubelet[2780]: E0913 00:27:54.150520 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.150750 kubelet[2780]: E0913 00:27:54.150585 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" Sep 13 00:27:54.150750 kubelet[2780]: E0913 00:27:54.150608 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" Sep 13 00:27:54.150913 kubelet[2780]: E0913 00:27:54.150658 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84657f8b5b-tzhb7_calico-apiserver(abb1d385-d781-4636-ae7d-8e9e046fdbb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84657f8b5b-tzhb7_calico-apiserver(abb1d385-d781-4636-ae7d-8e9e046fdbb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" podUID="abb1d385-d781-4636-ae7d-8e9e046fdbb2" Sep 13 00:27:54.152856 containerd[1597]: time="2025-09-13T00:27:54.152805020Z" level=error msg="Failed to destroy network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.153516 containerd[1597]: time="2025-09-13T00:27:54.153469083Z" level=error msg="encountered an error cleaning up failed sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.153573 containerd[1597]: time="2025-09-13T00:27:54.153556161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gt72g,Uid:39b89f9a-86c8-4e11-a841-da097dc790e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.153785 kubelet[2780]: E0913 00:27:54.153755 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.154271 kubelet[2780]: E0913 00:27:54.154138 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:54.154271 kubelet[2780]: E0913 00:27:54.154169 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-gt72g" Sep 13 00:27:54.154271 kubelet[2780]: E0913 00:27:54.154230 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-gt72g_calico-system(39b89f9a-86c8-4e11-a841-da097dc790e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-gt72g_calico-system(39b89f9a-86c8-4e11-a841-da097dc790e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gt72g" podUID="39b89f9a-86c8-4e11-a841-da097dc790e6" Sep 13 00:27:54.354989 containerd[1597]: time="2025-09-13T00:27:54.354683243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5d7t9,Uid:fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8,Namespace:kube-system,Attempt:0,}" Sep 13 00:27:54.402359 containerd[1597]: time="2025-09-13T00:27:54.402314299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45289,Uid:07a817c1-95b0-4d38-86e3-6bb11e5dacbe,Namespace:kube-system,Attempt:0,}" Sep 13 00:27:54.441560 containerd[1597]: time="2025-09-13T00:27:54.441411367Z" level=error msg="Failed to destroy network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.442935 containerd[1597]: time="2025-09-13T00:27:54.442128749Z" level=error msg="encountered an error cleaning up failed sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.442935 containerd[1597]: time="2025-09-13T00:27:54.442188828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5d7t9,Uid:fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.443165 kubelet[2780]: E0913 00:27:54.442434 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.443165 kubelet[2780]: E0913 00:27:54.442552 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5d7t9" Sep 13 00:27:54.443165 kubelet[2780]: E0913 00:27:54.442571 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-5d7t9" Sep 13 00:27:54.443317 kubelet[2780]: E0913 00:27:54.442621 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-5d7t9_kube-system(fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-5d7t9_kube-system(fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5d7t9" podUID="fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8" Sep 13 00:27:54.487152 containerd[1597]: time="2025-09-13T00:27:54.486810959Z" level=error msg="Failed to destroy network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.487548 containerd[1597]: time="2025-09-13T00:27:54.487490742Z" level=error msg="encountered an error cleaning up failed sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.487766 containerd[1597]: time="2025-09-13T00:27:54.487632219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45289,Uid:07a817c1-95b0-4d38-86e3-6bb11e5dacbe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.487953 kubelet[2780]: E0913 00:27:54.487904 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:54.488027 kubelet[2780]: E0913 00:27:54.487979 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-45289" Sep 13 00:27:54.488027 kubelet[2780]: E0913 00:27:54.488005 2780 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-45289" Sep 13 00:27:54.488161 kubelet[2780]: E0913 00:27:54.488049 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-45289_kube-system(07a817c1-95b0-4d38-86e3-6bb11e5dacbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-45289_kube-system(07a817c1-95b0-4d38-86e3-6bb11e5dacbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-45289" podUID="07a817c1-95b0-4d38-86e3-6bb11e5dacbe" Sep 13 00:27:54.990113 kubelet[2780]: I0913 00:27:54.989880 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:27:54.992281 containerd[1597]: time="2025-09-13T00:27:54.992136041Z" level=info msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" Sep 13 00:27:54.993181 containerd[1597]: time="2025-09-13T00:27:54.992626709Z" level=info msg="Ensure that sandbox 3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e in task-service has been cleanup successfully" Sep 13 00:27:54.994792 kubelet[2780]: I0913 00:27:54.994281 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:27:54.998271 containerd[1597]: time="2025-09-13T00:27:54.997340152Z" level=info msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" Sep 13 00:27:54.998599 kubelet[2780]: I0913 00:27:54.998506 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:27:54.999035 containerd[1597]: time="2025-09-13T00:27:54.998985311Z" level=info msg="Ensure that sandbox 9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2 in task-service has been cleanup successfully" Sep 13 00:27:55.000234 containerd[1597]: time="2025-09-13T00:27:55.000184241Z" level=info msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" Sep 13 00:27:55.001116 containerd[1597]: time="2025-09-13T00:27:55.000608430Z" level=info msg="Ensure that sandbox 02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81 in task-service has been cleanup successfully" Sep 13 00:27:55.004363 kubelet[2780]: I0913 00:27:55.004309 2780 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:27:55.008424 containerd[1597]: time="2025-09-13T00:27:55.006884043Z" level=info msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" Sep 13 00:27:55.008424 containerd[1597]: time="2025-09-13T00:27:55.007131677Z" level=info msg="Ensure that sandbox 3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a in task-service has been cleanup successfully" Sep 13 00:27:55.014171 kubelet[2780]: I0913 00:27:55.014016 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:27:55.017635 containerd[1597]: time="2025-09-13T00:27:55.017593354Z" level=info msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" Sep 13 00:27:55.018631 containerd[1597]: time="2025-09-13T00:27:55.018584291Z" level=info msg="Ensure that sandbox 438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867 in task-service has been cleanup successfully" Sep 13 00:27:55.023087 kubelet[2780]: I0913 00:27:55.022920 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:27:55.026728 containerd[1597]: time="2025-09-13T00:27:55.026674704Z" level=info msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\"" Sep 13 00:27:55.028469 kubelet[2780]: I0913 00:27:55.027388 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:27:55.029204 containerd[1597]: time="2025-09-13T00:27:55.029005729Z" level=info msg="Ensure that sandbox 58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2 in task-service has been cleanup successfully" Sep 13 00:27:55.031944 containerd[1597]: time="2025-09-13T00:27:55.031877223Z" level=info msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" Sep 13 00:27:55.035449 containerd[1597]: time="2025-09-13T00:27:55.034126051Z" level=info msg="Ensure that sandbox 4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6 in task-service has been cleanup successfully" Sep 13 00:27:55.047479 kubelet[2780]: I0913 00:27:55.047359 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:27:55.049902 containerd[1597]: time="2025-09-13T00:27:55.049738728Z" level=info msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" Sep 13 00:27:55.051922 containerd[1597]: time="2025-09-13T00:27:55.051681563Z" level=info msg="Ensure that sandbox dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b in task-service has been cleanup successfully" Sep 13 00:27:55.159198 containerd[1597]: time="2025-09-13T00:27:55.159045750Z" level=error msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" failed" error="failed to destroy network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.159809 
kubelet[2780]: E0913 00:27:55.159444 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:27:55.159809 kubelet[2780]: E0913 00:27:55.159563 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867"} Sep 13 00:27:55.159809 kubelet[2780]: E0913 00:27:55.159639 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.159809 kubelet[2780]: E0913 00:27:55.159662 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-784cfb86b9-clzw2" podUID="f2c5940f-c316-4c69-88fc-dec3ef9a910d" Sep 13 00:27:55.161757 containerd[1597]: time="2025-09-13T00:27:55.161560331Z" level=error msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" failed" error="failed to destroy network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.163484 kubelet[2780]: E0913 00:27:55.162114 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:27:55.163484 kubelet[2780]: E0913 00:27:55.162177 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2"} Sep 13 00:27:55.163484 kubelet[2780]: E0913 00:27:55.162211 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39b89f9a-86c8-4e11-a841-da097dc790e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.163484 kubelet[2780]: E0913 00:27:55.162232 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39b89f9a-86c8-4e11-a841-da097dc790e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-gt72g" podUID="39b89f9a-86c8-4e11-a841-da097dc790e6" Sep 13 00:27:55.165106 containerd[1597]: time="2025-09-13T00:27:55.165059130Z" level=error msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" failed" error="failed to destroy network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.165533 kubelet[2780]: E0913 00:27:55.165488 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:27:55.165600 kubelet[2780]: E0913 00:27:55.165551 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81"} Sep 13 00:27:55.165600 kubelet[2780]: E0913 00:27:55.165587 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4856d853-be72-4277-97ae-7eb5ab571384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.165705 kubelet[2780]: E0913 00:27:55.165612 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4856d853-be72-4277-97ae-7eb5ab571384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-686d78856d-wr74j" podUID="4856d853-be72-4277-97ae-7eb5ab571384" Sep 13 00:27:55.169580 containerd[1597]: time="2025-09-13T00:27:55.169526226Z" level=error msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" failed" error="failed to destroy network for sandbox 
\"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.170042 containerd[1597]: time="2025-09-13T00:27:55.170008215Z" level=error msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" failed" error="failed to destroy network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.170721 kubelet[2780]: E0913 00:27:55.170666 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:27:55.170830 kubelet[2780]: E0913 00:27:55.170734 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"} Sep 13 00:27:55.170830 kubelet[2780]: E0913 00:27:55.170773 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abb1d385-d781-4636-ae7d-8e9e046fdbb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.170830 kubelet[2780]: E0913 00:27:55.170800 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abb1d385-d781-4636-ae7d-8e9e046fdbb2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" podUID="abb1d385-d781-4636-ae7d-8e9e046fdbb2" Sep 13 00:27:55.171022 kubelet[2780]: E0913 00:27:55.170859 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:27:55.171022 kubelet[2780]: E0913 00:27:55.170874 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a"} Sep 13 00:27:55.171022 kubelet[2780]: E0913 00:27:55.170890 2780 
kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07a817c1-95b0-4d38-86e3-6bb11e5dacbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.171022 kubelet[2780]: E0913 00:27:55.170906 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07a817c1-95b0-4d38-86e3-6bb11e5dacbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-45289" podUID="07a817c1-95b0-4d38-86e3-6bb11e5dacbe" Sep 13 00:27:55.174129 containerd[1597]: time="2025-09-13T00:27:55.174021402Z" level=error msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" failed" error="failed to destroy network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.174402 kubelet[2780]: E0913 00:27:55.174316 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:27:55.174402 kubelet[2780]: E0913 00:27:55.174392 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e"} Sep 13 00:27:55.174530 kubelet[2780]: E0913 00:27:55.174434 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f40c21ca-4f97-49b3-b97b-d3d765668104\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.174530 kubelet[2780]: E0913 00:27:55.174482 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f40c21ca-4f97-49b3-b97b-d3d765668104\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" 
podUID="f40c21ca-4f97-49b3-b97b-d3d765668104" Sep 13 00:27:55.181447 containerd[1597]: time="2025-09-13T00:27:55.181383911Z" level=error msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" failed" error="failed to destroy network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.182831 kubelet[2780]: E0913 00:27:55.182674 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:27:55.183079 kubelet[2780]: E0913 00:27:55.182900 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6"} Sep 13 00:27:55.184705 kubelet[2780]: E0913 00:27:55.183346 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.184705 kubelet[2780]: E0913 00:27:55.184563 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-5d7t9" podUID="fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8" Sep 13 00:27:55.201978 containerd[1597]: time="2025-09-13T00:27:55.201756558Z" level=error msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" failed" error="failed to destroy network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:27:55.203060 kubelet[2780]: E0913 00:27:55.202695 2780 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:27:55.203060 kubelet[2780]: E0913 
00:27:55.202870 2780 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b"} Sep 13 00:27:55.203060 kubelet[2780]: E0913 00:27:55.202956 2780 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58a81445-bd68-46b0-ad45-3005b2bad0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:27:55.203060 kubelet[2780]: E0913 00:27:55.203006 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58a81445-bd68-46b0-ad45-3005b2bad0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zcpvc" podUID="58a81445-bd68-46b0-ad45-3005b2bad0b3" Sep 13 00:27:55.232047 sshd[3762]: Connection closed by authenticating user root 119.1.156.50 port 42080 [preauth] Sep 13 00:27:55.236355 systemd[1]: sshd@30-78.46.184.112:22-119.1.156.50:42080.service: Deactivated successfully. Sep 13 00:27:55.468923 systemd[1]: Started sshd@31-78.46.184.112:22-119.1.156.50:43989.service - OpenSSH per-connection server daemon (119.1.156.50:43989). Sep 13 00:27:56.622518 sshd[3940]: Connection closed by authenticating user root 119.1.156.50 port 43989 [preauth] Sep 13 00:27:56.626660 systemd[1]: sshd@31-78.46.184.112:22-119.1.156.50:43989.service: Deactivated successfully. Sep 13 00:27:56.830830 systemd[1]: Started sshd@32-78.46.184.112:22-119.1.156.50:44921.service - OpenSSH per-connection server daemon (119.1.156.50:44921). Sep 13 00:27:57.913749 sshd[3945]: Connection closed by authenticating user root 119.1.156.50 port 44921 [preauth] Sep 13 00:27:57.917531 systemd[1]: sshd@32-78.46.184.112:22-119.1.156.50:44921.service: Deactivated successfully. Sep 13 00:27:58.155955 systemd[1]: Started sshd@33-78.46.184.112:22-119.1.156.50:45958.service - OpenSSH per-connection server daemon (119.1.156.50:45958). Sep 13 00:27:59.283320 sshd[3950]: Connection closed by authenticating user root 119.1.156.50 port 45958 [preauth] Sep 13 00:27:59.286893 systemd[1]: sshd@33-78.46.184.112:22-119.1.156.50:45958.service: Deactivated successfully. Sep 13 00:27:59.511126 systemd[1]: Started sshd@34-78.46.184.112:22-119.1.156.50:46728.service - OpenSSH per-connection server daemon (119.1.156.50:46728). Sep 13 00:28:00.240449 kubelet[2780]: I0913 00:28:00.240135 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:28:00.626629 sshd[3961]: Connection closed by authenticating user root 119.1.156.50 port 46728 [preauth] Sep 13 00:28:00.629592 systemd[1]: sshd@34-78.46.184.112:22-119.1.156.50:46728.service: Deactivated successfully. Sep 13 00:28:00.859924 systemd[1]: Started sshd@35-78.46.184.112:22-119.1.156.50:48022.service - OpenSSH per-connection server daemon (119.1.156.50:48022). 
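Every CNI add and delete above dies at the same stat: the Calico plugin reads /var/lib/calico/nodename, and calico/node only writes that file once it is running with /var/lib/calico/ mounted, so the kubelet's CreatePodSandbox and KillPodSandbox retries cannot succeed until the node agent comes up. A minimal Go sketch of that gating step, assuming only what the error text itself states (the helper name is illustrative, not Calico's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path every failure above points at; calico/node
// writes it at startup so the CNI plugin can learn which node it is on.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the gating step from the log: if the file is
// missing, every CNI ADD/DEL on the host fails with the same error.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		// e.g. "stat /var/lib/calico/nodename: no such file or directory: check that ..."
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("running on node:", name)
}
```

Consistent with this, the sandbox operations only begin to succeed after calico-node starts in the entries that follow.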
Sep 13 00:28:01.383041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309197347.mount: Deactivated successfully. Sep 13 00:28:01.418420 containerd[1597]: time="2025-09-13T00:28:01.417377845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:01.419716 containerd[1597]: time="2025-09-13T00:28:01.419675732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:28:01.420988 containerd[1597]: time="2025-09-13T00:28:01.420935273Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:01.424144 containerd[1597]: time="2025-09-13T00:28:01.423506556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:01.424498 containerd[1597]: time="2025-09-13T00:28:01.424434423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.428121465s" Sep 13 00:28:01.424594 containerd[1597]: time="2025-09-13T00:28:01.424576221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:28:01.442259 containerd[1597]: time="2025-09-13T00:28:01.442214286Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:28:01.458862 containerd[1597]: time="2025-09-13T00:28:01.458725567Z" level=info msg="CreateContainer within sandbox \"96269e0b14867163967f5e0b5664048fb2b0894532c787e73b26744f550121a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f\"" Sep 13 00:28:01.460887 containerd[1597]: time="2025-09-13T00:28:01.459252519Z" level=info msg="StartContainer for \"b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f\"" Sep 13 00:28:01.527283 containerd[1597]: time="2025-09-13T00:28:01.527185017Z" level=info msg="StartContainer for \"b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f\" returns successfully" Sep 13 00:28:01.679562 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:28:01.679721 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 13 00:28:01.839791 containerd[1597]: time="2025-09-13T00:28:01.839705457Z" level=info msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" Sep 13 00:28:01.985122 sshd[3968]: Connection closed by authenticating user root 119.1.156.50 port 48022 [preauth] Sep 13 00:28:01.989180 systemd[1]: sshd@35-78.46.184.112:22-119.1.156.50:48022.service: Deactivated successfully.
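As a back-of-the-envelope check on the pull above: the PullImage line reports 151100319 bytes fetched in 7.428121465s, roughly 19.4 MiB/s. The arithmetic, with both figures copied verbatim from the log:

```go
package main

import "fmt"

func main() {
	// Figures copied from the "Pulled image ... calico/node:v3.30.3" line above.
	const bytes = 151100319.0   // repo digest size in bytes
	const seconds = 7.428121465 // reported pull duration
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints 19.4 MiB/s
}
```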
Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.966 [INFO][4033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.968 [INFO][4033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" iface="eth0" netns="/var/run/netns/cni-a146249a-6147-7369-be01-c6b29045e63e" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.969 [INFO][4033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" iface="eth0" netns="/var/run/netns/cni-a146249a-6147-7369-be01-c6b29045e63e" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.970 [INFO][4033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" iface="eth0" netns="/var/run/netns/cni-a146249a-6147-7369-be01-c6b29045e63e" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.970 [INFO][4033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:01.970 [INFO][4033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.057 [INFO][4040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.060 [INFO][4040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.060 [INFO][4040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.079 [WARNING][4040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.079 [INFO][4040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.084 [INFO][4040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:02.097433 containerd[1597]: 2025-09-13 00:28:02.094 [INFO][4033] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:02.099870 containerd[1597]: time="2025-09-13T00:28:02.099565666Z" level=info msg="TearDown network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" successfully" Sep 13 00:28:02.099870 containerd[1597]: time="2025-09-13T00:28:02.099607506Z" level=info msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" returns successfully" Sep 13 00:28:02.214155 kubelet[2780]: I0913 00:28:02.213557 2780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-backend-key-pair\") pod \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " Sep 13 00:28:02.214155 kubelet[2780]: I0913 00:28:02.213632 2780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-ca-bundle\") pod \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " Sep 13 00:28:02.214155 kubelet[2780]: I0913 00:28:02.213659 2780 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np9xc\" (UniqueName: \"kubernetes.io/projected/f2c5940f-c316-4c69-88fc-dec3ef9a910d-kube-api-access-np9xc\") pod \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\" (UID: \"f2c5940f-c316-4c69-88fc-dec3ef9a910d\") " Sep 13 00:28:02.216165 systemd[1]: Started sshd@36-78.46.184.112:22-119.1.156.50:48972.service - OpenSSH per-connection server daemon (119.1.156.50:48972). Sep 13 00:28:02.221141 kubelet[2780]: I0913 00:28:02.221090 2780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c5940f-c316-4c69-88fc-dec3ef9a910d-kube-api-access-np9xc" (OuterVolumeSpecName: "kube-api-access-np9xc") pod "f2c5940f-c316-4c69-88fc-dec3ef9a910d" (UID: "f2c5940f-c316-4c69-88fc-dec3ef9a910d"). InnerVolumeSpecName "kube-api-access-np9xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:28:02.221853 kubelet[2780]: I0913 00:28:02.221696 2780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2c5940f-c316-4c69-88fc-dec3ef9a910d" (UID: "f2c5940f-c316-4c69-88fc-dec3ef9a910d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:28:02.228935 kubelet[2780]: I0913 00:28:02.228782 2780 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2c5940f-c316-4c69-88fc-dec3ef9a910d" (UID: "f2c5940f-c316-4c69-88fc-dec3ef9a910d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:28:02.314880 kubelet[2780]: I0913 00:28:02.314820 2780 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-ca-bundle\") on node \"ci-4081-3-5-n-c2bbffc425\" DevicePath \"\"" Sep 13 00:28:02.315304 kubelet[2780]: I0913 00:28:02.315219 2780 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np9xc\" (UniqueName: \"kubernetes.io/projected/f2c5940f-c316-4c69-88fc-dec3ef9a910d-kube-api-access-np9xc\") on node \"ci-4081-3-5-n-c2bbffc425\" DevicePath \"\"" Sep 13 00:28:02.315304 kubelet[2780]: I0913 00:28:02.315278 2780 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2c5940f-c316-4c69-88fc-dec3ef9a910d-whisker-backend-key-pair\") on node \"ci-4081-3-5-n-c2bbffc425\" DevicePath \"\"" Sep 13 00:28:02.391643 systemd[1]: run-netns-cni\x2da146249a\x2d6147\x2d7369\x2dbe01\x2dc6b29045e63e.mount: Deactivated successfully. Sep 13 00:28:02.392056 systemd[1]: var-lib-kubelet-pods-f2c5940f\x2dc316\x2d4c69\x2d88fc\x2ddec3ef9a910d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnp9xc.mount: Deactivated successfully. Sep 13 00:28:02.392242 systemd[1]: var-lib-kubelet-pods-f2c5940f\x2dc316\x2d4c69\x2d88fc\x2ddec3ef9a910d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:28:03.106366 kubelet[2780]: I0913 00:28:03.103896 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-spqct" podStartSLOduration=2.151777695 podStartE2EDuration="19.103876262s" podCreationTimestamp="2025-09-13 00:27:44 +0000 UTC" firstStartedPulling="2025-09-13 00:27:44.473509159 +0000 UTC m=+22.824254723" lastFinishedPulling="2025-09-13 00:28:01.425607726 +0000 UTC m=+39.776353290" observedRunningTime="2025-09-13 00:28:02.117044596 +0000 UTC m=+40.467790200" watchObservedRunningTime="2025-09-13 00:28:03.103876262 +0000 UTC m=+41.454621826" Sep 13 00:28:03.222145 kubelet[2780]: I0913 00:28:03.222094 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe174ba-32eb-45fd-a85b-418f37e2057c-whisker-ca-bundle\") pod \"whisker-c79f9869b-qwz99\" (UID: \"cbe174ba-32eb-45fd-a85b-418f37e2057c\") " pod="calico-system/whisker-c79f9869b-qwz99" Sep 13 00:28:03.222145 kubelet[2780]: I0913 00:28:03.222157 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fb8w\" (UniqueName: \"kubernetes.io/projected/cbe174ba-32eb-45fd-a85b-418f37e2057c-kube-api-access-2fb8w\") pod \"whisker-c79f9869b-qwz99\" (UID: \"cbe174ba-32eb-45fd-a85b-418f37e2057c\") " pod="calico-system/whisker-c79f9869b-qwz99" Sep 13 00:28:03.222651 kubelet[2780]: I0913 00:28:03.222181 2780 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbe174ba-32eb-45fd-a85b-418f37e2057c-whisker-backend-key-pair\") pod \"whisker-c79f9869b-qwz99\" (UID: \"cbe174ba-32eb-45fd-a85b-418f37e2057c\") " pod="calico-system/whisker-c79f9869b-qwz99" Sep 13 00:28:03.362950 sshd[4072]: Connection closed by authenticating user root 119.1.156.50 port 48972 [preauth] Sep 13 00:28:03.367780 systemd[1]: sshd@36-78.46.184.112:22-119.1.156.50:48972.service: Deactivated 
successfully. Sep 13 00:28:03.465008 containerd[1597]: time="2025-09-13T00:28:03.464956887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c79f9869b-qwz99,Uid:cbe174ba-32eb-45fd-a85b-418f37e2057c,Namespace:calico-system,Attempt:0,}" Sep 13 00:28:03.596686 systemd[1]: Started sshd@37-78.46.184.112:22-119.1.156.50:50433.service - OpenSSH per-connection server daemon (119.1.156.50:50433). Sep 13 00:28:03.823589 kubelet[2780]: I0913 00:28:03.823408 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c5940f-c316-4c69-88fc-dec3ef9a910d" path="/var/lib/kubelet/pods/f2c5940f-c316-4c69-88fc-dec3ef9a910d/volumes" Sep 13 00:28:03.853132 systemd-networkd[1244]: cali9e88cfe9969: Link UP Sep 13 00:28:03.854136 systemd-networkd[1244]: cali9e88cfe9969: Gained carrier Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.582 [INFO][4200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.619 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0 whisker-c79f9869b- calico-system cbe174ba-32eb-45fd-a85b-418f37e2057c 918 0 2025-09-13 00:28:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c79f9869b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 whisker-c79f9869b-qwz99 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9e88cfe9969 [] [] }} ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.619 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.725 [INFO][4213] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" HandleID="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.725 [INFO][4213] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" HandleID="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"whisker-c79f9869b-qwz99", "timestamp":"2025-09-13 00:28:03.725495868 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.725 
[INFO][4213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.725 [INFO][4213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.726 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.739 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.749 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.762 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.765 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.769 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.769 [INFO][4213] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.773 [INFO][4213] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169 Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.790 [INFO][4213] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.798 [INFO][4213] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.65/26] block=192.168.85.64/26 handle="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.799 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.65/26] handle="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.799 [INFO][4213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:03.892207 containerd[1597]: 2025-09-13 00:28:03.799 [INFO][4213] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.65/26] IPv6=[] ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" HandleID="k8s-pod-network.d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.813 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0", GenerateName:"whisker-c79f9869b-", Namespace:"calico-system", SelfLink:"", UID:"cbe174ba-32eb-45fd-a85b-418f37e2057c", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c79f9869b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"whisker-c79f9869b-qwz99", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e88cfe9969", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.815 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.65/32] ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.816 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e88cfe9969 ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.855 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.856 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system"
Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0", GenerateName:"whisker-c79f9869b-", Namespace:"calico-system", SelfLink:"", UID:"cbe174ba-32eb-45fd-a85b-418f37e2057c", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 28, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c79f9869b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169", Pod:"whisker-c79f9869b-qwz99", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9e88cfe9969", MAC:"aa:76:00:f2:78:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:03.892928 containerd[1597]: 2025-09-13 00:28:03.876 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169" Namespace="calico-system" Pod="whisker-c79f9869b-qwz99" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--c79f9869b--qwz99-eth0" Sep 13 00:28:03.942272 containerd[1597]: time="2025-09-13T00:28:03.941353660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:03.942272 containerd[1597]: time="2025-09-13T00:28:03.941821094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:03.942889 containerd[1597]: time="2025-09-13T00:28:03.942522086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:03.942889 containerd[1597]: time="2025-09-13T00:28:03.942729044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:03.981263 systemd[1]: run-containerd-runc-k8s.io-d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169-runc.aAas8n.mount: Deactivated successfully. 
Sep 13 00:28:03.991556 kernel: bpftool[4293]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:28:04.038483 containerd[1597]: time="2025-09-13T00:28:04.038375231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c79f9869b-qwz99,Uid:cbe174ba-32eb-45fd-a85b-418f37e2057c,Namespace:calico-system,Attempt:0,} returns sandbox id \"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169\"" Sep 13 00:28:04.044871 containerd[1597]: time="2025-09-13T00:28:04.044826242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:28:04.265736 systemd-networkd[1244]: vxlan.calico: Link UP Sep 13 00:28:04.265752 systemd-networkd[1244]: vxlan.calico: Gained carrier Sep 13 00:28:04.803914 sshd[4211]: Connection closed by authenticating user root 119.1.156.50 port 50433 [preauth] Sep 13 00:28:04.807329 systemd[1]: sshd@37-78.46.184.112:22-119.1.156.50:50433.service: Deactivated successfully. Sep 13 00:28:05.047838 systemd[1]: Started sshd@38-78.46.184.112:22-119.1.156.50:51421.service - OpenSSH per-connection server daemon (119.1.156.50:51421). Sep 13 00:28:05.187791 systemd-networkd[1244]: cali9e88cfe9969: Gained IPv6LL Sep 13 00:28:05.443885 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Sep 13 00:28:05.737480 containerd[1597]: time="2025-09-13T00:28:05.737334695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:05.739006 containerd[1597]: time="2025-09-13T00:28:05.738773481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:28:05.740562 containerd[1597]: time="2025-09-13T00:28:05.740210147Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:05.748766 containerd[1597]: time="2025-09-13T00:28:05.748531429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:05.750393 containerd[1597]: time="2025-09-13T00:28:05.750337011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.70541721s" Sep 13 00:28:05.750599 containerd[1597]: time="2025-09-13T00:28:05.750578209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:28:05.775328 containerd[1597]: time="2025-09-13T00:28:05.775223895Z" level=info msg="CreateContainer within sandbox \"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:28:05.792880 containerd[1597]: time="2025-09-13T00:28:05.792065016Z" level=info msg="CreateContainer within sandbox \"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"20498178bd2df4648584d68fd5bdebd4ccc650d4b6a0f3bbc602b5cbf148273e\"" Sep 13 00:28:05.793621 containerd[1597]: 
time="2025-09-13T00:28:05.793552641Z" level=info msg="StartContainer for \"20498178bd2df4648584d68fd5bdebd4ccc650d4b6a0f3bbc602b5cbf148273e\"" Sep 13 00:28:05.867912 containerd[1597]: time="2025-09-13T00:28:05.867864537Z" level=info msg="StartContainer for \"20498178bd2df4648584d68fd5bdebd4ccc650d4b6a0f3bbc602b5cbf148273e\" returns successfully" Sep 13 00:28:05.871496 containerd[1597]: time="2025-09-13T00:28:05.871427703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:28:06.200568 sshd[4394]: Connection closed by authenticating user root 119.1.156.50 port 51421 [preauth] Sep 13 00:28:06.205563 systemd[1]: sshd@38-78.46.184.112:22-119.1.156.50:51421.service: Deactivated successfully. Sep 13 00:28:06.418841 systemd[1]: Started sshd@39-78.46.184.112:22-119.1.156.50:52309.service - OpenSSH per-connection server daemon (119.1.156.50:52309). Sep 13 00:28:06.812577 containerd[1597]: time="2025-09-13T00:28:06.812513029Z" level=info msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.889 [INFO][4450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.894 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" iface="eth0" netns="/var/run/netns/cni-c3d99f2d-06d8-caab-b3c0-533a18af1584" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.900 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" iface="eth0" netns="/var/run/netns/cni-c3d99f2d-06d8-caab-b3c0-533a18af1584" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.904 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" iface="eth0" netns="/var/run/netns/cni-c3d99f2d-06d8-caab-b3c0-533a18af1584" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.907 [INFO][4450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.908 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.967 [INFO][4457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.967 [INFO][4457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.967 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.978 [WARNING][4457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.978 [INFO][4457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.980 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:06.986435 containerd[1597]: 2025-09-13 00:28:06.984 [INFO][4450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:06.990607 containerd[1597]: time="2025-09-13T00:28:06.986710617Z" level=info msg="TearDown network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" successfully" Sep 13 00:28:06.990607 containerd[1597]: time="2025-09-13T00:28:06.986772096Z" level=info msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" returns successfully" Sep 13 00:28:06.993904 systemd[1]: run-netns-cni\x2dc3d99f2d\x2d06d8\x2dcaab\x2db3c0\x2d533a18af1584.mount: Deactivated successfully. Sep 13 00:28:06.994580 containerd[1597]: time="2025-09-13T00:28:06.994543391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcpvc,Uid:58a81445-bd68-46b0-ad45-3005b2bad0b3,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:07.153210 systemd-networkd[1244]: calif106e6ea51b: Link UP Sep 13 00:28:07.153355 systemd-networkd[1244]: calif106e6ea51b: Gained carrier Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.055 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0 csi-node-driver- calico-system 58a81445-bd68-46b0-ad45-3005b2bad0b3 937 0 2025-09-13 00:27:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 csi-node-driver-zcpvc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif106e6ea51b [] [] }} ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.055 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.086 [INFO][4478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" 
HandleID="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.086 [INFO][4478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" HandleID="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"csi-node-driver-zcpvc", "timestamp":"2025-09-13 00:28:07.086079843 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.086 [INFO][4478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.086 [INFO][4478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.086 [INFO][4478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.103 [INFO][4478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.110 [INFO][4478] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.116 [INFO][4478] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.119 [INFO][4478] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.122 [INFO][4478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.122 [INFO][4478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.125 [INFO][4478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53 Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.134 [INFO][4478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.143 [INFO][4478] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.66/26] block=192.168.85.64/26 handle="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.143 
[INFO][4478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.66/26] handle="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.143 [INFO][4478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:07.177051 containerd[1597]: 2025-09-13 00:28:07.143 [INFO][4478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.66/26] IPv6=[] ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" HandleID="k8s-pod-network.226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.150 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a81445-bd68-46b0-ad45-3005b2bad0b3", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"csi-node-driver-zcpvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif106e6ea51b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.150 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.66/32] ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.150 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif106e6ea51b ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.153 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.153 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a81445-bd68-46b0-ad45-3005b2bad0b3", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53", Pod:"csi-node-driver-zcpvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif106e6ea51b", MAC:"e2:ae:cb:14:e1:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:07.178230 containerd[1597]: 2025-09-13 00:28:07.174 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53" Namespace="calico-system" Pod="csi-node-driver-zcpvc" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:07.212408 containerd[1597]: time="2025-09-13T00:28:07.203187357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:07.212408 containerd[1597]: time="2025-09-13T00:28:07.203275996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:07.212408 containerd[1597]: time="2025-09-13T00:28:07.203297276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:07.212408 containerd[1597]: time="2025-09-13T00:28:07.203420555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:07.254250 containerd[1597]: time="2025-09-13T00:28:07.254193708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcpvc,Uid:58a81445-bd68-46b0-ad45-3005b2bad0b3,Namespace:calico-system,Attempt:1,} returns sandbox id \"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53\"" Sep 13 00:28:07.520736 sshd[4440]: Connection closed by authenticating user root 119.1.156.50 port 52309 [preauth] Sep 13 00:28:07.521389 systemd[1]: sshd@39-78.46.184.112:22-119.1.156.50:52309.service: Deactivated successfully. Sep 13 00:28:07.760958 systemd[1]: Started sshd@40-78.46.184.112:22-119.1.156.50:53246.service - OpenSSH per-connection server daemon (119.1.156.50:53246). Sep 13 00:28:07.816055 containerd[1597]: time="2025-09-13T00:28:07.815899891Z" level=info msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" Sep 13 00:28:07.817286 containerd[1597]: time="2025-09-13T00:28:07.815925571Z" level=info msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" Sep 13 00:28:07.817825 containerd[1597]: time="2025-09-13T00:28:07.815958451Z" level=info msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" Sep 13 00:28:07.820118 containerd[1597]: time="2025-09-13T00:28:07.815984131Z" level=info msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\"" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.932 [INFO][4560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.932 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" iface="eth0" netns="/var/run/netns/cni-eceb83c8-56cc-b5dd-d61c-449496bd40af" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.937 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" iface="eth0" netns="/var/run/netns/cni-eceb83c8-56cc-b5dd-d61c-449496bd40af" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.937 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" iface="eth0" netns="/var/run/netns/cni-eceb83c8-56cc-b5dd-d61c-449496bd40af" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.939 [INFO][4560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:07.940 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.051 [INFO][4599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.051 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.051 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.071 [WARNING][4599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.071 [INFO][4599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.074 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:08.091054 containerd[1597]: 2025-09-13 00:28:08.083 [INFO][4560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:08.091054 containerd[1597]: time="2025-09-13T00:28:08.087870100Z" level=info msg="TearDown network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" successfully" Sep 13 00:28:08.091054 containerd[1597]: time="2025-09-13T00:28:08.087909860Z" level=info msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" returns successfully" Sep 13 00:28:08.094699 systemd[1]: run-netns-cni\x2deceb83c8\x2d56cc\x2db5dd\x2dd61c\x2d449496bd40af.mount: Deactivated successfully. Sep 13 00:28:08.101198 containerd[1597]: time="2025-09-13T00:28:08.101080499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5d7t9,Uid:fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8,Namespace:kube-system,Attempt:1,}" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.000 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.003 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" iface="eth0" netns="/var/run/netns/cni-57adc11f-4296-bd45-07bf-346c5d4c7d2e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.004 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" iface="eth0" netns="/var/run/netns/cni-57adc11f-4296-bd45-07bf-346c5d4c7d2e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.006 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" iface="eth0" netns="/var/run/netns/cni-57adc11f-4296-bd45-07bf-346c5d4c7d2e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.006 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.006 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.050 [INFO][4611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.051 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.074 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.093 [WARNING][4611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.093 [INFO][4611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.102 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:08.129486 containerd[1597]: 2025-09-13 00:28:08.116 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:08.136091 containerd[1597]: time="2025-09-13T00:28:08.132543986Z" level=info msg="TearDown network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" successfully" Sep 13 00:28:08.136091 containerd[1597]: time="2025-09-13T00:28:08.132592865Z" level=info msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" returns successfully" Sep 13 00:28:08.136091 containerd[1597]: time="2025-09-13T00:28:08.135057170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-zj9qr,Uid:f40c21ca-4f97-49b3-b97b-d3d765668104,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:28:08.138296 systemd[1]: run-netns-cni\x2d57adc11f\x2d4296\x2dbd45\x2d07bf\x2d346c5d4c7d2e.mount: Deactivated successfully. Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:07.996 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:07.998 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" iface="eth0" netns="/var/run/netns/cni-c97d75d4-006e-d9ce-4049-4459e5e8e944" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.003 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" iface="eth0" netns="/var/run/netns/cni-c97d75d4-006e-d9ce-4049-4459e5e8e944" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.004 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" iface="eth0" netns="/var/run/netns/cni-c97d75d4-006e-d9ce-4049-4459e5e8e944" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.004 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.004 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.065 [INFO][4609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.065 [INFO][4609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.102 [INFO][4609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.128 [WARNING][4609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.136 [INFO][4609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.143 [INFO][4609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:08.161485 containerd[1597]: 2025-09-13 00:28:08.154 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:08.162446 containerd[1597]: time="2025-09-13T00:28:08.162300123Z" level=info msg="TearDown network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" successfully" Sep 13 00:28:08.162446 containerd[1597]: time="2025-09-13T00:28:08.162365722Z" level=info msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" returns successfully" Sep 13 00:28:08.165746 containerd[1597]: time="2025-09-13T00:28:08.165379344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gt72g,Uid:39b89f9a-86c8-4e11-a841-da097dc790e6,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:08.168355 systemd[1]: run-netns-cni\x2dc97d75d4\x2d006e\x2dd9ce\x2d4049\x2d4459e5e8e944.mount: Deactivated successfully. Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:07.988 [INFO][4584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:07.996 [INFO][4584] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" iface="eth0" netns="/var/run/netns/cni-0e8fa1d2-03b5-f495-082e-be58ae20c23e" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.000 [INFO][4584] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" iface="eth0" netns="/var/run/netns/cni-0e8fa1d2-03b5-f495-082e-be58ae20c23e" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.002 [INFO][4584] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" iface="eth0" netns="/var/run/netns/cni-0e8fa1d2-03b5-f495-082e-be58ae20c23e" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.002 [INFO][4584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.003 [INFO][4584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.119 [INFO][4612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.120 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.143 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.167 [WARNING][4612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.167 [INFO][4612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.171 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:08.189684 containerd[1597]: 2025-09-13 00:28:08.185 [INFO][4584] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Sep 13 00:28:08.192147 containerd[1597]: time="2025-09-13T00:28:08.191907581Z" level=info msg="TearDown network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" successfully" Sep 13 00:28:08.192147 containerd[1597]: time="2025-09-13T00:28:08.191960901Z" level=info msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" returns successfully" Sep 13 00:28:08.201639 containerd[1597]: time="2025-09-13T00:28:08.201479522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-tzhb7,Uid:abb1d385-d781-4636-ae7d-8e9e046fdbb2,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:28:08.462249 systemd-networkd[1244]: calid4ae7dce5ac: Link UP Sep 13 00:28:08.463103 systemd-networkd[1244]: calid4ae7dce5ac: Gained carrier Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.308 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0 calico-apiserver-84657f8b5b- calico-apiserver abb1d385-d781-4636-ae7d-8e9e046fdbb2 949 0 2025-09-13 00:27:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84657f8b5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 calico-apiserver-84657f8b5b-tzhb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid4ae7dce5ac [] [] }} ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.308 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.362 [INFO][4682] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" HandleID="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.362 [INFO][4682] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" HandleID="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"calico-apiserver-84657f8b5b-tzhb7", "timestamp":"2025-09-13 00:28:08.362435173 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.363 [INFO][4682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.363 [INFO][4682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.363 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.381 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.393 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.408 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.418 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.425 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.425 [INFO][4682] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.428 [INFO][4682] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3 Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.436 [INFO][4682] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.449 [INFO][4682] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.67/26] block=192.168.85.64/26 handle="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.449 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.67/26] handle="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.449 [INFO][4682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:08.513170 containerd[1597]: 2025-09-13 00:28:08.449 [INFO][4682] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.67/26] IPv6=[] ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" HandleID="k8s-pod-network.6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 00:28:08.458 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb1d385-d781-4636-ae7d-8e9e046fdbb2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"calico-apiserver-84657f8b5b-tzhb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ae7dce5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 00:28:08.458 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.67/32] ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 00:28:08.459 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4ae7dce5ac ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 00:28:08.464 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 
00:28:08.465 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb1d385-d781-4636-ae7d-8e9e046fdbb2", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3", Pod:"calico-apiserver-84657f8b5b-tzhb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ae7dce5ac", MAC:"c2:90:a7:a0:dc:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.514370 containerd[1597]: 2025-09-13 00:28:08.503 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-tzhb7" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0" Sep 13 00:28:08.596657 systemd-networkd[1244]: cali918c70442a3: Link UP Sep 13 00:28:08.597162 systemd-networkd[1244]: cali918c70442a3: Gained carrier Sep 13 00:28:08.616482 containerd[1597]: time="2025-09-13T00:28:08.615943015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:08.616482 containerd[1597]: time="2025-09-13T00:28:08.616020135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:08.616482 containerd[1597]: time="2025-09-13T00:28:08.616035215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.616482 containerd[1597]: time="2025-09-13T00:28:08.616141974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.346 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0 calico-apiserver-84657f8b5b- calico-apiserver f40c21ca-4f97-49b3-b97b-d3d765668104 950 0 2025-09-13 00:27:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84657f8b5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 calico-apiserver-84657f8b5b-zj9qr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali918c70442a3 [] [] }} ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.348 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.452 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" HandleID="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.453 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" HandleID="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"calico-apiserver-84657f8b5b-zj9qr", "timestamp":"2025-09-13 00:28:08.450917269 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.454 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.454 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.454 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.496 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.520 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.528 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.534 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.541 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.544 [INFO][4692] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.547 [INFO][4692] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.556 [INFO][4692] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.568 [INFO][4692] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.68/26] block=192.168.85.64/26 handle="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.568 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.68/26] handle="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.569 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:08.633800 containerd[1597]: 2025-09-13 00:28:08.569 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.68/26] IPv6=[] ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" HandleID="k8s-pod-network.e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 00:28:08.584 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c21ca-4f97-49b3-b97b-d3d765668104", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"calico-apiserver-84657f8b5b-zj9qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali918c70442a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 00:28:08.584 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.68/32] ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 00:28:08.585 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali918c70442a3 ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 00:28:08.597 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 
00:28:08.601 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c21ca-4f97-49b3-b97b-d3d765668104", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f", Pod:"calico-apiserver-84657f8b5b-zj9qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali918c70442a3", MAC:"aa:12:d7:8c:e5:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.634762 containerd[1597]: 2025-09-13 00:28:08.621 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f" Namespace="calico-apiserver" Pod="calico-apiserver-84657f8b5b-zj9qr" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:08.697206 containerd[1597]: time="2025-09-13T00:28:08.695692245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:08.697206 containerd[1597]: time="2025-09-13T00:28:08.695745525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:08.697206 containerd[1597]: time="2025-09-13T00:28:08.695756445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.697206 containerd[1597]: time="2025-09-13T00:28:08.695852524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.718802 systemd-networkd[1244]: cali87977008fa7: Link UP Sep 13 00:28:08.719864 systemd-networkd[1244]: cali87977008fa7: Gained carrier Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.364 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0 coredns-7c65d6cfc9- kube-system fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8 947 0 2025-09-13 00:27:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 coredns-7c65d6cfc9-5d7t9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87977008fa7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.365 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.532 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" HandleID="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.532 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" HandleID="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b640), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"coredns-7c65d6cfc9-5d7t9", "timestamp":"2025-09-13 00:28:08.532326049 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.532 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.574 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.574 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.599 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.624 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.649 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.654 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.664 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.664 [INFO][4697] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.668 [INFO][4697] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.680 [INFO][4697] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.695 [INFO][4697] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.69/26] block=192.168.85.64/26 handle="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.697 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.69/26] handle="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.699 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:08.786801 containerd[1597]: 2025-09-13 00:28:08.699 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.69/26] IPv6=[] ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" HandleID="k8s-pod-network.6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.709 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"coredns-7c65d6cfc9-5d7t9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87977008fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.710 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.69/32] ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.710 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87977008fa7 ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.720 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.744 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad", Pod:"coredns-7c65d6cfc9-5d7t9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87977008fa7", MAC:"4e:33:55:30:e1:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.787663 containerd[1597]: 2025-09-13 00:28:08.777 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad" Namespace="kube-system" Pod="coredns-7c65d6cfc9-5d7t9" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:08.835432 containerd[1597]: time="2025-09-13T00:28:08.834804631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-tzhb7,Uid:abb1d385-d781-4636-ae7d-8e9e046fdbb2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3\"" Sep 13 00:28:08.836172 containerd[1597]: time="2025-09-13T00:28:08.835553186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84657f8b5b-zj9qr,Uid:f40c21ca-4f97-49b3-b97b-d3d765668104,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f\"" Sep 13 00:28:08.842210 systemd-networkd[1244]: calic0f518361cf: Link UP Sep 13 00:28:08.848664 systemd-networkd[1244]: calic0f518361cf: 
Gained carrier Sep 13 00:28:08.866899 containerd[1597]: time="2025-09-13T00:28:08.866413716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:08.866899 containerd[1597]: time="2025-09-13T00:28:08.866501916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:08.866899 containerd[1597]: time="2025-09-13T00:28:08.866525796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.866899 containerd[1597]: time="2025-09-13T00:28:08.866778274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.398 [INFO][4647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0 goldmane-7988f88666- calico-system 39b89f9a-86c8-4e11-a841-da097dc790e6 948 0 2025-09-13 00:27:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 goldmane-7988f88666-gt72g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic0f518361cf [] [] }} ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.399 [INFO][4647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.543 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" HandleID="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.543 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" HandleID="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"goldmane-7988f88666-gt72g", "timestamp":"2025-09-13 00:28:08.54359638 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.543 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.702 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.702 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.729 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.747 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.760 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.765 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.780 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.780 [INFO][4702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.794 [INFO][4702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.813 [INFO][4702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.829 [INFO][4702] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.70/26] block=192.168.85.64/26 handle="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.829 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.70/26] handle="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.829 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:08.881644 containerd[1597]: 2025-09-13 00:28:08.829 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.70/26] IPv6=[] ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" HandleID="k8s-pod-network.98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.836 [INFO][4647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"39b89f9a-86c8-4e11-a841-da097dc790e6", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"goldmane-7988f88666-gt72g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0f518361cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.836 [INFO][4647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.70/32] ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.836 [INFO][4647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0f518361cf ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.850 [INFO][4647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.855 [INFO][4647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" 
Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"39b89f9a-86c8-4e11-a841-da097dc790e6", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c", Pod:"goldmane-7988f88666-gt72g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0f518361cf", MAC:"b2:63:70:60:d1:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:08.882202 containerd[1597]: 2025-09-13 00:28:08.876 [INFO][4647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c" Namespace="calico-system" Pod="goldmane-7988f88666-gt72g" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:08.906078 sshd[4536]: Connection closed by authenticating user root 119.1.156.50 port 53246 [preauth] Sep 13 00:28:08.913323 systemd[1]: sshd@40-78.46.184.112:22-119.1.156.50:53246.service: Deactivated successfully. Sep 13 00:28:08.962227 containerd[1597]: time="2025-09-13T00:28:08.961078015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:08.962535 containerd[1597]: time="2025-09-13T00:28:08.962486046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:08.963103 containerd[1597]: time="2025-09-13T00:28:08.962633845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.963103 containerd[1597]: time="2025-09-13T00:28:08.962748845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:08.981623 containerd[1597]: time="2025-09-13T00:28:08.980063698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5d7t9,Uid:fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8,Namespace:kube-system,Attempt:1,} returns sandbox id \"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad\"" Sep 13 00:28:08.992629 containerd[1597]: time="2025-09-13T00:28:08.992129744Z" level=info msg="CreateContainer within sandbox \"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:28:09.026103 containerd[1597]: time="2025-09-13T00:28:09.026035321Z" level=info msg="CreateContainer within sandbox \"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a0a9dbfc2f12bb245492617eb4b7037f391d9f2dc50fd58b3028cc889ad88ea\"" Sep 13 00:28:09.028856 containerd[1597]: time="2025-09-13T00:28:09.028808307Z" level=info msg="StartContainer for \"1a0a9dbfc2f12bb245492617eb4b7037f391d9f2dc50fd58b3028cc889ad88ea\"" Sep 13 00:28:09.094186 containerd[1597]: time="2025-09-13T00:28:09.093845256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-gt72g,Uid:39b89f9a-86c8-4e11-a841-da097dc790e6,Namespace:calico-system,Attempt:1,} returns sandbox id \"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c\"" Sep 13 00:28:09.107625 systemd[1]: run-netns-cni\x2d0e8fa1d2\x2d03b5\x2df495\x2d082e\x2dbe58ae20c23e.mount: Deactivated successfully. Sep 13 00:28:09.134973 containerd[1597]: time="2025-09-13T00:28:09.134659647Z" level=info msg="StartContainer for \"1a0a9dbfc2f12bb245492617eb4b7037f391d9f2dc50fd58b3028cc889ad88ea\" returns successfully" Sep 13 00:28:09.148822 systemd[1]: Started sshd@41-78.46.184.112:22-119.1.156.50:55201.service - OpenSSH per-connection server daemon (119.1.156.50:55201). Sep 13 00:28:09.156563 systemd-networkd[1244]: calif106e6ea51b: Gained IPv6LL Sep 13 00:28:09.237207 kubelet[2780]: I0913 00:28:09.237060 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5d7t9" podStartSLOduration=42.237040845 podStartE2EDuration="42.237040845s" podCreationTimestamp="2025-09-13 00:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:09.234414979 +0000 UTC m=+47.585160543" watchObservedRunningTime="2025-09-13 00:28:09.237040845 +0000 UTC m=+47.587786409" Sep 13 00:28:09.451694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131389004.mount: Deactivated successfully. 
Sep 13 00:28:09.478503 containerd[1597]: time="2025-09-13T00:28:09.476805702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:09.478503 containerd[1597]: time="2025-09-13T00:28:09.478094616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:28:09.478503 containerd[1597]: time="2025-09-13T00:28:09.478115416Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:09.481426 containerd[1597]: time="2025-09-13T00:28:09.481371359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:09.482439 containerd[1597]: time="2025-09-13T00:28:09.482400154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.610903172s" Sep 13 00:28:09.482581 containerd[1597]: time="2025-09-13T00:28:09.482562993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:28:09.485388 containerd[1597]: time="2025-09-13T00:28:09.485349899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:28:09.487746 containerd[1597]: time="2025-09-13T00:28:09.487631607Z" level=info msg="CreateContainer within sandbox \"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:28:09.513477 containerd[1597]: time="2025-09-13T00:28:09.513368236Z" level=info msg="CreateContainer within sandbox \"d252384c0f135ceb0d34c559b8cc295ae900dba9062a8b223e83bf4a85b9e169\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3f39ad0c4a81bc43c4f1ceece08cc158c36b6e5e6cb6a6eec5831d5ae95b6eef\"" Sep 13 00:28:09.515523 containerd[1597]: time="2025-09-13T00:28:09.515489545Z" level=info msg="StartContainer for \"3f39ad0c4a81bc43c4f1ceece08cc158c36b6e5e6cb6a6eec5831d5ae95b6eef\"" Sep 13 00:28:09.591300 containerd[1597]: time="2025-09-13T00:28:09.591251559Z" level=info msg="StartContainer for \"3f39ad0c4a81bc43c4f1ceece08cc158c36b6e5e6cb6a6eec5831d5ae95b6eef\" returns successfully" Sep 13 00:28:09.603712 systemd-networkd[1244]: calid4ae7dce5ac: Gained IPv6LL Sep 13 00:28:09.813364 containerd[1597]: time="2025-09-13T00:28:09.812650510Z" level=info msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" Sep 13 00:28:09.813364 containerd[1597]: time="2025-09-13T00:28:09.813042908Z" level=info msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.884 [INFO][5030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 
00:28:09.886 [INFO][5030] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" iface="eth0" netns="/var/run/netns/cni-08090108-ffb7-a2c5-2a8e-35c3897a8ae3" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.890 [INFO][5030] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" iface="eth0" netns="/var/run/netns/cni-08090108-ffb7-a2c5-2a8e-35c3897a8ae3" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.891 [INFO][5030] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" iface="eth0" netns="/var/run/netns/cni-08090108-ffb7-a2c5-2a8e-35c3897a8ae3" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.927 [INFO][5044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.927 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.927 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.938 [WARNING][5044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.938 [INFO][5044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.940 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:09.950117 containerd[1597]: 2025-09-13 00:28:09.943 [INFO][5030] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:09.950117 containerd[1597]: time="2025-09-13T00:28:09.950178728Z" level=info msg="TearDown network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" successfully" Sep 13 00:28:09.950117 containerd[1597]: time="2025-09-13T00:28:09.950205928Z" level=info msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" returns successfully" Sep 13 00:28:09.951667 containerd[1597]: time="2025-09-13T00:28:09.951249163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686d78856d-wr74j,Uid:4856d853-be72-4277-97ae-7eb5ab571384,Namespace:calico-system,Attempt:1,}" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" iface="eth0" netns="/var/run/netns/cni-5e258d8e-0ad2-cfb9-9abf-122c30b5ec8d" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" iface="eth0" netns="/var/run/netns/cni-5e258d8e-0ad2-cfb9-9abf-122c30b5ec8d" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" iface="eth0" netns="/var/run/netns/cni-5e258d8e-0ad2-cfb9-9abf-122c30b5ec8d" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.892 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.933 [INFO][5043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.934 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.940 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.955 [WARNING][5043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.955 [INFO][5043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.958 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:09.962393 containerd[1597]: 2025-09-13 00:28:09.960 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:09.963134 containerd[1597]: time="2025-09-13T00:28:09.963030463Z" level=info msg="TearDown network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" successfully" Sep 13 00:28:09.963134 containerd[1597]: time="2025-09-13T00:28:09.963059103Z" level=info msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" returns successfully" Sep 13 00:28:09.965239 containerd[1597]: time="2025-09-13T00:28:09.964905013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45289,Uid:07a817c1-95b0-4d38-86e3-6bb11e5dacbe,Namespace:kube-system,Attempt:1,}" Sep 13 00:28:10.052286 systemd-networkd[1244]: cali918c70442a3: Gained IPv6LL Sep 13 00:28:10.097369 systemd[1]: run-netns-cni\x2d5e258d8e\x2d0ad2\x2dcfb9\x2d9abf\x2d122c30b5ec8d.mount: Deactivated successfully. Sep 13 00:28:10.098693 systemd[1]: run-netns-cni\x2d08090108\x2dffb7\x2da2c5\x2d2a8e\x2d35c3897a8ae3.mount: Deactivated successfully. 
Sep 13 00:28:10.173596 systemd-networkd[1244]: calif601d647089: Link UP Sep 13 00:28:10.173845 systemd-networkd[1244]: calif601d647089: Gained carrier Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.032 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0 coredns-7c65d6cfc9- kube-system 07a817c1-95b0-4d38-86e3-6bb11e5dacbe 981 0 2025-09-13 00:27:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 coredns-7c65d6cfc9-45289 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif601d647089 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.033 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.071 [INFO][5083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" HandleID="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.072 [INFO][5083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" HandleID="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000272ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-5-n-c2bbffc425", "pod":"coredns-7c65d6cfc9-45289", "timestamp":"2025-09-13 00:28:10.071955019 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.072 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.072 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.072 [INFO][5083] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.085 [INFO][5083] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.103 [INFO][5083] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.112 [INFO][5083] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.116 [INFO][5083] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.120 [INFO][5083] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.120 [INFO][5083] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.123 [INFO][5083] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11 Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.130 [INFO][5083] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.140 [INFO][5083] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.71/26] block=192.168.85.64/26 handle="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.141 [INFO][5083] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.71/26] handle="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.141 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:10.191483 containerd[1597]: 2025-09-13 00:28:10.141 [INFO][5083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.71/26] IPv6=[] ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" HandleID="k8s-pod-network.be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.151 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07a817c1-95b0-4d38-86e3-6bb11e5dacbe", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"coredns-7c65d6cfc9-45289", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif601d647089", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.151 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.71/32] ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.151 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif601d647089 ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.172 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.172 [INFO][5065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07a817c1-95b0-4d38-86e3-6bb11e5dacbe", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11", Pod:"coredns-7c65d6cfc9-45289", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif601d647089", MAC:"66:4a:32:59:4a:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:10.193412 containerd[1597]: 2025-09-13 00:28:10.188 [INFO][5065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45289" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:10.241131 kubelet[2780]: I0913 00:28:10.240438 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c79f9869b-qwz99" podStartSLOduration=1.798970172 podStartE2EDuration="7.24041797s" podCreationTimestamp="2025-09-13 00:28:03 +0000 UTC" firstStartedPulling="2025-09-13 00:28:04.042356149 +0000 UTC m=+42.393101753" lastFinishedPulling="2025-09-13 00:28:09.483803947 +0000 UTC m=+47.834549551" observedRunningTime="2025-09-13 00:28:10.239602173 +0000 UTC m=+48.590347737" watchObservedRunningTime="2025-09-13 00:28:10.24041797 +0000 UTC m=+48.591163534" Sep 13 00:28:10.249645 containerd[1597]: time="2025-09-13T00:28:10.245309830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:10.249645 containerd[1597]: time="2025-09-13T00:28:10.247140702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:10.249645 containerd[1597]: time="2025-09-13T00:28:10.247168142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:10.249645 containerd[1597]: time="2025-09-13T00:28:10.247493221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:10.333709 sshd[4957]: Connection closed by authenticating user root 119.1.156.50 port 55201 [preauth] Sep 13 00:28:10.343205 systemd[1]: sshd@41-78.46.184.112:22-119.1.156.50:55201.service: Deactivated successfully. Sep 13 00:28:10.371665 systemd-networkd[1244]: cali023e9a495b2: Link UP Sep 13 00:28:10.377027 systemd-networkd[1244]: cali023e9a495b2: Gained carrier Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.026 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0 calico-kube-controllers-686d78856d- calico-system 4856d853-be72-4277-97ae-7eb5ab571384 980 0 2025-09-13 00:27:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:686d78856d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-5-n-c2bbffc425 calico-kube-controllers-686d78856d-wr74j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali023e9a495b2 [] [] }} ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.026 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.077 [INFO][5078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" HandleID="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.077 [INFO][5078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" HandleID="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-5-n-c2bbffc425", 
"pod":"calico-kube-controllers-686d78856d-wr74j", "timestamp":"2025-09-13 00:28:10.077002278 +0000 UTC"}, Hostname:"ci-4081-3-5-n-c2bbffc425", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.077 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.141 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.141 [INFO][5078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-5-n-c2bbffc425' Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.188 [INFO][5078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.204 [INFO][5078] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.220 [INFO][5078] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.231 [INFO][5078] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.236 [INFO][5078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.237 [INFO][5078] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.259 [INFO][5078] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.287 [INFO][5078] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.342 [INFO][5078] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.85.72/26] block=192.168.85.64/26 handle="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.342 [INFO][5078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.72/26] handle="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" host="ci-4081-3-5-n-c2bbffc425" Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.342 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:28:10.417326 containerd[1597]: 2025-09-13 00:28:10.342 [INFO][5078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.85.72/26] IPv6=[] ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" HandleID="k8s-pod-network.277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.355 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0", GenerateName:"calico-kube-controllers-686d78856d-", Namespace:"calico-system", SelfLink:"", UID:"4856d853-be72-4277-97ae-7eb5ab571384", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686d78856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"", Pod:"calico-kube-controllers-686d78856d-wr74j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali023e9a495b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.356 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.72/32] ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.356 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali023e9a495b2 ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.377 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" 
WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.390 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0", GenerateName:"calico-kube-controllers-686d78856d-", Namespace:"calico-system", SelfLink:"", UID:"4856d853-be72-4277-97ae-7eb5ab571384", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686d78856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd", Pod:"calico-kube-controllers-686d78856d-wr74j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali023e9a495b2", MAC:"26:07:ae:51:fc:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:10.417996 containerd[1597]: 2025-09-13 00:28:10.409 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd" Namespace="calico-system" Pod="calico-kube-controllers-686d78856d-wr74j" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:10.426231 containerd[1597]: time="2025-09-13T00:28:10.424070499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45289,Uid:07a817c1-95b0-4d38-86e3-6bb11e5dacbe,Namespace:kube-system,Attempt:1,} returns sandbox id \"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11\"" Sep 13 00:28:10.442047 containerd[1597]: time="2025-09-13T00:28:10.441995626Z" level=info msg="CreateContainer within sandbox \"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:28:10.481793 containerd[1597]: time="2025-09-13T00:28:10.480619908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:28:10.481793 containerd[1597]: time="2025-09-13T00:28:10.480709947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:28:10.481793 containerd[1597]: time="2025-09-13T00:28:10.480725587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:10.483635 containerd[1597]: time="2025-09-13T00:28:10.483402096Z" level=info msg="CreateContainer within sandbox \"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b73184d806bf617335e48249af7616f631e291683c3a3d2a5f8555d62316b7b8\"" Sep 13 00:28:10.484325 containerd[1597]: time="2025-09-13T00:28:10.483327297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:28:10.488146 containerd[1597]: time="2025-09-13T00:28:10.487999958Z" level=info msg="StartContainer for \"b73184d806bf617335e48249af7616f631e291683c3a3d2a5f8555d62316b7b8\"" Sep 13 00:28:10.559503 systemd[1]: Started sshd@42-78.46.184.112:22-119.1.156.50:56079.service - OpenSSH per-connection server daemon (119.1.156.50:56079). Sep 13 00:28:10.627404 containerd[1597]: time="2025-09-13T00:28:10.627177669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686d78856d-wr74j,Uid:4856d853-be72-4277-97ae-7eb5ab571384,Namespace:calico-system,Attempt:1,} returns sandbox id \"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd\"" Sep 13 00:28:10.641603 containerd[1597]: time="2025-09-13T00:28:10.641541290Z" level=info msg="StartContainer for \"b73184d806bf617335e48249af7616f631e291683c3a3d2a5f8555d62316b7b8\" returns successfully" Sep 13 00:28:10.691603 systemd-networkd[1244]: cali87977008fa7: Gained IPv6LL Sep 13 00:28:10.755993 systemd-networkd[1244]: calic0f518361cf: Gained IPv6LL Sep 13 00:28:11.430782 containerd[1597]: time="2025-09-13T00:28:11.430726603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:11.434506 containerd[1597]: time="2025-09-13T00:28:11.432410717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:28:11.434506 containerd[1597]: time="2025-09-13T00:28:11.432773076Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:11.436337 containerd[1597]: time="2025-09-13T00:28:11.436297505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:11.437392 containerd[1597]: time="2025-09-13T00:28:11.437349422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.951807484s" Sep 13 00:28:11.437948 containerd[1597]: time="2025-09-13T00:28:11.437919300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:28:11.441845 containerd[1597]: time="2025-09-13T00:28:11.441805608Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:28:11.443048 containerd[1597]: time="2025-09-13T00:28:11.442514366Z" level=info msg="CreateContainer within sandbox \"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:28:11.473359 containerd[1597]: time="2025-09-13T00:28:11.472786192Z" level=info msg="CreateContainer within sandbox \"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"daff7414d04b4306098e7a7b8bb3bcb327e54208eda4a54d66d11ce3bbab5f41\"" Sep 13 00:28:11.474427 containerd[1597]: time="2025-09-13T00:28:11.474395507Z" level=info msg="StartContainer for \"daff7414d04b4306098e7a7b8bb3bcb327e54208eda4a54d66d11ce3bbab5f41\"" Sep 13 00:28:11.571863 containerd[1597]: time="2025-09-13T00:28:11.571807884Z" level=info msg="StartContainer for \"daff7414d04b4306098e7a7b8bb3bcb327e54208eda4a54d66d11ce3bbab5f41\" returns successfully" Sep 13 00:28:11.706174 sshd[5208]: Connection closed by authenticating user root 119.1.156.50 port 56079 [preauth] Sep 13 00:28:11.709703 systemd[1]: sshd@42-78.46.184.112:22-119.1.156.50:56079.service: Deactivated successfully. Sep 13 00:28:11.716915 systemd-networkd[1244]: calif601d647089: Gained IPv6LL Sep 13 00:28:11.936933 systemd[1]: Started sshd@43-78.46.184.112:22-119.1.156.50:57103.service - OpenSSH per-connection server daemon (119.1.156.50:57103). Sep 13 00:28:12.093049 systemd[1]: run-containerd-runc-k8s.io-daff7414d04b4306098e7a7b8bb3bcb327e54208eda4a54d66d11ce3bbab5f41-runc.6WoLdU.mount: Deactivated successfully. Sep 13 00:28:12.163937 systemd-networkd[1244]: cali023e9a495b2: Gained IPv6LL Sep 13 00:28:12.247137 kubelet[2780]: I0913 00:28:12.247058 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-45289" podStartSLOduration=45.247031218 podStartE2EDuration="45.247031218s" podCreationTimestamp="2025-09-13 00:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:28:11.24156283 +0000 UTC m=+49.592308394" watchObservedRunningTime="2025-09-13 00:28:12.247031218 +0000 UTC m=+50.597776822" Sep 13 00:28:13.057494 sshd[5281]: Connection closed by authenticating user root 119.1.156.50 port 57103 [preauth] Sep 13 00:28:13.060850 systemd[1]: sshd@43-78.46.184.112:22-119.1.156.50:57103.service: Deactivated successfully. Sep 13 00:28:13.288297 systemd[1]: Started sshd@44-78.46.184.112:22-119.1.156.50:58115.service - OpenSSH per-connection server daemon (119.1.156.50:58115). 
Sep 13 00:28:14.402236 containerd[1597]: time="2025-09-13T00:28:14.402186652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:14.404360 containerd[1597]: time="2025-09-13T00:28:14.404060532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:28:14.404723 containerd[1597]: time="2025-09-13T00:28:14.404654651Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:14.407595 containerd[1597]: time="2025-09-13T00:28:14.407555810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:14.408611 containerd[1597]: time="2025-09-13T00:28:14.408430250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.965744605s" Sep 13 00:28:14.408611 containerd[1597]: time="2025-09-13T00:28:14.408496770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:28:14.411783 containerd[1597]: time="2025-09-13T00:28:14.411724329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:28:14.416726 containerd[1597]: time="2025-09-13T00:28:14.416476087Z" level=info msg="CreateContainer within sandbox \"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:28:14.423112 sshd[5289]: Connection closed by authenticating user root 119.1.156.50 port 58115 [preauth] Sep 13 00:28:14.426365 systemd[1]: sshd@44-78.46.184.112:22-119.1.156.50:58115.service: Deactivated successfully. Sep 13 00:28:14.437369 containerd[1597]: time="2025-09-13T00:28:14.437160280Z" level=info msg="CreateContainer within sandbox \"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4c4d57615dfacb665d2d26d8a5d7de7bd6c7ca55d1d113222a07919788aac22\"" Sep 13 00:28:14.440414 containerd[1597]: time="2025-09-13T00:28:14.439739519Z" level=info msg="StartContainer for \"c4c4d57615dfacb665d2d26d8a5d7de7bd6c7ca55d1d113222a07919788aac22\"" Sep 13 00:28:14.555373 containerd[1597]: time="2025-09-13T00:28:14.555318799Z" level=info msg="StartContainer for \"c4c4d57615dfacb665d2d26d8a5d7de7bd6c7ca55d1d113222a07919788aac22\" returns successfully" Sep 13 00:28:14.645803 systemd[1]: Started sshd@45-78.46.184.112:22-119.1.156.50:59615.service - OpenSSH per-connection server daemon (119.1.156.50:59615). 
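
containerd records each pull's wall time inline: roughly 1.95s for csi above, 2.97s for the first apiserver pull here, and (further down) only 393ms for the second apiserver pull, plausibly because the content is already in the local store and the follow-up ImageUpdate fetches almost nothing ("bytes read=77"). A small sketch for extracting those durations from a journal dump; the regex is tailored to the escaped msg="..." quoting seen in these lines and only handles the s/ms duration forms that actually occur:

import re

# Matches: Pulled image \"<name>\" ... in <number>s"  (or ms") as journald
# renders containerd's message; not a general parser for Go duration strings.
PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".* in ([0-9.]+)(ms|s)"')

def pull_seconds(line: str):
    """Return (image, seconds) for a 'Pulled image' entry, else None."""
    m = PULLED.search(line)
    if m is None:
        return None
    image, value, unit = m.group(1), float(m.group(2)), m.group(3)
    return image, (value / 1000.0 if unit == "ms" else value)

sample = ('msg="Pulled image \\"ghcr.io/flatcar/calico/csi:v3.30.3\\" with '
          'image id \\"sha256:5e2b...\\", size \\"9596730\\" in 1.951807484s"')
print(pull_seconds(sample))   # ('ghcr.io/flatcar/calico/csi:v3.30.3', 1.951807484)
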
Sep 13 00:28:14.798348 containerd[1597]: time="2025-09-13T00:28:14.797668435Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:14.800839 containerd[1597]: time="2025-09-13T00:28:14.800145714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:28:14.804945 containerd[1597]: time="2025-09-13T00:28:14.804890792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 393.117023ms" Sep 13 00:28:14.805220 containerd[1597]: time="2025-09-13T00:28:14.805098312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:28:14.809587 containerd[1597]: time="2025-09-13T00:28:14.809546711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:28:14.813972 containerd[1597]: time="2025-09-13T00:28:14.813817709Z" level=info msg="CreateContainer within sandbox \"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:28:14.837665 containerd[1597]: time="2025-09-13T00:28:14.837493141Z" level=info msg="CreateContainer within sandbox \"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9b092b465b57965ddffc3404fd8af7e869916338cb847b0e0b62439507b16b22\"" Sep 13 00:28:14.838376 containerd[1597]: time="2025-09-13T00:28:14.838317421Z" level=info msg="StartContainer for \"9b092b465b57965ddffc3404fd8af7e869916338cb847b0e0b62439507b16b22\"" Sep 13 00:28:14.939397 containerd[1597]: time="2025-09-13T00:28:14.939347706Z" level=info msg="StartContainer for \"9b092b465b57965ddffc3404fd8af7e869916338cb847b0e0b62439507b16b22\" returns successfully" Sep 13 00:28:15.289556 kubelet[2780]: I0913 00:28:15.288393 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84657f8b5b-tzhb7" podStartSLOduration=30.724105547 podStartE2EDuration="36.288375112s" podCreationTimestamp="2025-09-13 00:27:39 +0000 UTC" firstStartedPulling="2025-09-13 00:28:08.845439485 +0000 UTC m=+47.196185049" lastFinishedPulling="2025-09-13 00:28:14.40970905 +0000 UTC m=+52.760454614" observedRunningTime="2025-09-13 00:28:15.288331392 +0000 UTC m=+53.639076956" watchObservedRunningTime="2025-09-13 00:28:15.288375112 +0000 UTC m=+53.639120676" Sep 13 00:28:15.290387 kubelet[2780]: I0913 00:28:15.290044 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84657f8b5b-zj9qr" podStartSLOduration=30.333504329 podStartE2EDuration="36.290030313s" podCreationTimestamp="2025-09-13 00:27:39 +0000 UTC" firstStartedPulling="2025-09-13 00:28:08.851732527 +0000 UTC m=+47.202478091" lastFinishedPulling="2025-09-13 00:28:14.808258431 +0000 UTC m=+53.159004075" observedRunningTime="2025-09-13 00:28:15.269818863 +0000 UTC m=+53.620564387" watchObservedRunningTime="2025-09-13 00:28:15.290030313 +0000 UTC m=+53.640775837" Sep 13 00:28:15.762740 
sshd[5342]: Connection closed by authenticating user root 119.1.156.50 port 59615 [preauth] Sep 13 00:28:15.768841 systemd[1]: sshd@45-78.46.184.112:22-119.1.156.50:59615.service: Deactivated successfully. Sep 13 00:28:15.999046 systemd[1]: Started sshd@46-78.46.184.112:22-119.1.156.50:60439.service - OpenSSH per-connection server daemon (119.1.156.50:60439). Sep 13 00:28:16.254867 kubelet[2780]: I0913 00:28:16.253970 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:28:17.140610 sshd[5396]: Connection closed by authenticating user root 119.1.156.50 port 60439 [preauth] Sep 13 00:28:17.143446 systemd[1]: sshd@46-78.46.184.112:22-119.1.156.50:60439.service: Deactivated successfully. Sep 13 00:28:17.361605 systemd[1]: Started sshd@47-78.46.184.112:22-119.1.156.50:61810.service - OpenSSH per-connection server daemon (119.1.156.50:61810). Sep 13 00:28:17.391567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405203774.mount: Deactivated successfully. Sep 13 00:28:17.811011 containerd[1597]: time="2025-09-13T00:28:17.809901101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:17.813166 containerd[1597]: time="2025-09-13T00:28:17.813089868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:28:17.814624 containerd[1597]: time="2025-09-13T00:28:17.814590391Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:17.818327 containerd[1597]: time="2025-09-13T00:28:17.818280719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:17.819313 containerd[1597]: time="2025-09-13T00:28:17.819144121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.00936485s" Sep 13 00:28:17.819531 containerd[1597]: time="2025-09-13T00:28:17.819442282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:28:17.822408 containerd[1597]: time="2025-09-13T00:28:17.822364688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:28:17.825378 containerd[1597]: time="2025-09-13T00:28:17.824773333Z" level=info msg="CreateContainer within sandbox \"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:28:17.849099 containerd[1597]: time="2025-09-13T00:28:17.849051226Z" level=info msg="CreateContainer within sandbox \"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f\"" Sep 13 00:28:17.851927 containerd[1597]: time="2025-09-13T00:28:17.850032228Z" level=info msg="StartContainer for 
\"6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f\"" Sep 13 00:28:17.947189 containerd[1597]: time="2025-09-13T00:28:17.947131718Z" level=info msg="StartContainer for \"6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f\" returns successfully" Sep 13 00:28:18.281483 kubelet[2780]: I0913 00:28:18.281394 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-gt72g" podStartSLOduration=25.565067214 podStartE2EDuration="34.281361661s" podCreationTimestamp="2025-09-13 00:27:44 +0000 UTC" firstStartedPulling="2025-09-13 00:28:09.104936839 +0000 UTC m=+47.455682403" lastFinishedPulling="2025-09-13 00:28:17.821231326 +0000 UTC m=+56.171976850" observedRunningTime="2025-09-13 00:28:18.280784579 +0000 UTC m=+56.631530143" watchObservedRunningTime="2025-09-13 00:28:18.281361661 +0000 UTC m=+56.632107225" Sep 13 00:28:18.462065 sshd[5407]: Connection closed by authenticating user root 119.1.156.50 port 61810 [preauth] Sep 13 00:28:18.465921 systemd[1]: sshd@47-78.46.184.112:22-119.1.156.50:61810.service: Deactivated successfully. Sep 13 00:28:18.682921 systemd[1]: Started sshd@48-78.46.184.112:22-119.1.156.50:62844.service - OpenSSH per-connection server daemon (119.1.156.50:62844). Sep 13 00:28:19.779748 sshd[5458]: Connection closed by authenticating user root 119.1.156.50 port 62844 [preauth] Sep 13 00:28:19.782667 systemd[1]: sshd@48-78.46.184.112:22-119.1.156.50:62844.service: Deactivated successfully. Sep 13 00:28:20.018961 systemd[1]: Started sshd@49-78.46.184.112:22-119.1.156.50:63815.service - OpenSSH per-connection server daemon (119.1.156.50:63815). Sep 13 00:28:21.140126 sshd[5483]: Connection closed by authenticating user root 119.1.156.50 port 63815 [preauth] Sep 13 00:28:21.143353 systemd[1]: sshd@49-78.46.184.112:22-119.1.156.50:63815.service: Deactivated successfully. 
Sep 13 00:28:21.144251 containerd[1597]: time="2025-09-13T00:28:21.144195392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:21.147849 containerd[1597]: time="2025-09-13T00:28:21.147254928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:28:21.148862 containerd[1597]: time="2025-09-13T00:28:21.148814536Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:21.151924 containerd[1597]: time="2025-09-13T00:28:21.151684231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:21.152253 containerd[1597]: time="2025-09-13T00:28:21.152213033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.329802745s" Sep 13 00:28:21.152253 containerd[1597]: time="2025-09-13T00:28:21.152251794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:28:21.162652 containerd[1597]: time="2025-09-13T00:28:21.162611407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:28:21.174193 containerd[1597]: time="2025-09-13T00:28:21.173998146Z" level=info msg="CreateContainer within sandbox \"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:28:21.198280 containerd[1597]: time="2025-09-13T00:28:21.197571067Z" level=info msg="CreateContainer within sandbox \"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7\"" Sep 13 00:28:21.198607 containerd[1597]: time="2025-09-13T00:28:21.198561192Z" level=info msg="StartContainer for \"7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7\"" Sep 13 00:28:21.271785 containerd[1597]: time="2025-09-13T00:28:21.271729850Z" level=info msg="StartContainer for \"7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7\" returns successfully" Sep 13 00:28:21.378836 systemd[1]: Started sshd@50-78.46.184.112:22-119.1.156.50:64581.service - OpenSSH per-connection server daemon (119.1.156.50:64581). Sep 13 00:28:21.784972 containerd[1597]: time="2025-09-13T00:28:21.784639616Z" level=info msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.830 [WARNING][5592] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"39b89f9a-86c8-4e11-a841-da097dc790e6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c", Pod:"goldmane-7988f88666-gt72g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0f518361cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.831 [INFO][5592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.831 [INFO][5592] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" iface="eth0" netns="" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.831 [INFO][5592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.831 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.855 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.855 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.855 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.865 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.865 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.867 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:21.877954 containerd[1597]: 2025-09-13 00:28:21.873 [INFO][5592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.879162 containerd[1597]: time="2025-09-13T00:28:21.878662221Z" level=info msg="TearDown network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" successfully" Sep 13 00:28:21.879162 containerd[1597]: time="2025-09-13T00:28:21.878709181Z" level=info msg="StopPodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" returns successfully" Sep 13 00:28:21.881091 containerd[1597]: time="2025-09-13T00:28:21.881043953Z" level=info msg="RemovePodSandbox for \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" Sep 13 00:28:21.883858 containerd[1597]: time="2025-09-13T00:28:21.883711287Z" level=info msg="Forcibly stopping sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\"" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.931 [WARNING][5615] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"39b89f9a-86c8-4e11-a841-da097dc790e6", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"98b97cd5fa7e69ec96add3765a27ea6f31061c69f0f0815af621f39a97b9053c", Pod:"goldmane-7988f88666-gt72g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0f518361cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.932 [INFO][5615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.932 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" iface="eth0" netns="" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.932 [INFO][5615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.932 [INFO][5615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.955 [INFO][5622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.956 [INFO][5622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.956 [INFO][5622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.968 [WARNING][5622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.968 [INFO][5622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" HandleID="k8s-pod-network.9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-goldmane--7988f88666--gt72g-eth0" Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.971 [INFO][5622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:21.975585 containerd[1597]: 2025-09-13 00:28:21.974 [INFO][5615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2" Sep 13 00:28:21.976227 containerd[1597]: time="2025-09-13T00:28:21.975638441Z" level=info msg="TearDown network for sandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" successfully" Sep 13 00:28:21.982400 containerd[1597]: time="2025-09-13T00:28:21.982277755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:21.982400 containerd[1597]: time="2025-09-13T00:28:21.982402516Z" level=info msg="RemovePodSandbox \"9fd81aabca88f53d28782a10e5b99eae1981c519252c24447c0896ceec819fa2\" returns successfully" Sep 13 00:28:21.983354 containerd[1597]: time="2025-09-13T00:28:21.983061759Z" level=info msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.026 [WARNING][5636] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a81445-bd68-46b0-ad45-3005b2bad0b3", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53", Pod:"csi-node-driver-zcpvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif106e6ea51b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.026 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.026 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" iface="eth0" netns="" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.026 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.026 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.052 [INFO][5643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.052 [INFO][5643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.052 [INFO][5643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.062 [WARNING][5643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.062 [INFO][5643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.065 [INFO][5643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.069083 containerd[1597]: 2025-09-13 00:28:22.066 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.070812 containerd[1597]: time="2025-09-13T00:28:22.069529132Z" level=info msg="TearDown network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" successfully" Sep 13 00:28:22.070812 containerd[1597]: time="2025-09-13T00:28:22.069625013Z" level=info msg="StopPodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" returns successfully" Sep 13 00:28:22.071745 containerd[1597]: time="2025-09-13T00:28:22.071221102Z" level=info msg="RemovePodSandbox for \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" Sep 13 00:28:22.071745 containerd[1597]: time="2025-09-13T00:28:22.071273702Z" level=info msg="Forcibly stopping sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\"" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.141 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a81445-bd68-46b0-ad45-3005b2bad0b3", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53", Pod:"csi-node-driver-zcpvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif106e6ea51b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.141 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.141 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" iface="eth0" netns="" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.141 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.141 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.170 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.170 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.170 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.181 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.181 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" HandleID="k8s-pod-network.dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Workload="ci--4081--3--5--n--c2bbffc425-k8s-csi--node--driver--zcpvc-eth0" Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.183 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.187557 containerd[1597]: 2025-09-13 00:28:22.184 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b" Sep 13 00:28:22.187557 containerd[1597]: time="2025-09-13T00:28:22.186319975Z" level=info msg="TearDown network for sandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" successfully" Sep 13 00:28:22.190679 containerd[1597]: time="2025-09-13T00:28:22.190598680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:22.190788 containerd[1597]: time="2025-09-13T00:28:22.190730481Z" level=info msg="RemovePodSandbox \"dd0863f3795c241b97d16f823f19bf8d2fe96b1ca10c5406084231c4d03f8b8b\" returns successfully" Sep 13 00:28:22.191726 containerd[1597]: time="2025-09-13T00:28:22.191690687Z" level=info msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.247 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07a817c1-95b0-4d38-86e3-6bb11e5dacbe", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11", Pod:"coredns-7c65d6cfc9-45289", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif601d647089", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.247 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.247 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" iface="eth0" netns="" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.248 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.248 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.269 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.269 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.269 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.279 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.279 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.281 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.285574 containerd[1597]: 2025-09-13 00:28:22.283 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.285574 containerd[1597]: time="2025-09-13T00:28:22.285445595Z" level=info msg="TearDown network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" successfully" Sep 13 00:28:22.285574 containerd[1597]: time="2025-09-13T00:28:22.285534196Z" level=info msg="StopPodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" returns successfully" Sep 13 00:28:22.287199 containerd[1597]: time="2025-09-13T00:28:22.286574162Z" level=info msg="RemovePodSandbox for \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" Sep 13 00:28:22.287199 containerd[1597]: time="2025-09-13T00:28:22.286611322Z" level=info msg="Forcibly stopping sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\"" Sep 13 00:28:22.358607 kubelet[2780]: I0913 00:28:22.356908 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-686d78856d-wr74j" podStartSLOduration=27.83619098 podStartE2EDuration="38.356885493s" podCreationTimestamp="2025-09-13 00:27:44 +0000 UTC" firstStartedPulling="2025-09-13 00:28:10.632649646 +0000 UTC m=+48.983395210" lastFinishedPulling="2025-09-13 00:28:21.153344159 +0000 UTC m=+59.504089723" observedRunningTime="2025-09-13 00:28:21.31051141 +0000 UTC m=+59.661256974" watchObservedRunningTime="2025-09-13 00:28:22.356885493 +0000 UTC m=+60.707631057" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.344 [WARNING][5701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07a817c1-95b0-4d38-86e3-6bb11e5dacbe", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"be1082f5e13ca49b44b51f6a919deb70bf2710e4a7f05a6ef187441fc8114a11", Pod:"coredns-7c65d6cfc9-45289", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif601d647089", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.345 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.345 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" iface="eth0" netns="" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.345 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.345 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.379 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.380 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.380 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.395 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.395 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" HandleID="k8s-pod-network.3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--45289-eth0" Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.397 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.400871 containerd[1597]: 2025-09-13 00:28:22.399 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a" Sep 13 00:28:22.401398 containerd[1597]: time="2025-09-13T00:28:22.400920351Z" level=info msg="TearDown network for sandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" successfully" Sep 13 00:28:22.405045 containerd[1597]: time="2025-09-13T00:28:22.404973174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:22.405166 containerd[1597]: time="2025-09-13T00:28:22.405107615Z" level=info msg="RemovePodSandbox \"3ee0996c23852dfcfcab9b904673385efc1095d6afc8ee1610df5ed918a5df3a\" returns successfully" Sep 13 00:28:22.406611 containerd[1597]: time="2025-09-13T00:28:22.406580384Z" level=info msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.450 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c21ca-4f97-49b3-b97b-d3d765668104", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f", Pod:"calico-apiserver-84657f8b5b-zj9qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali918c70442a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.451 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.451 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" iface="eth0" netns="" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.451 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.451 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.474 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.474 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.474 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.485 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.486 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.488 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.497553 containerd[1597]: 2025-09-13 00:28:22.492 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.497553 containerd[1597]: time="2025-09-13T00:28:22.497216634Z" level=info msg="TearDown network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" successfully" Sep 13 00:28:22.497553 containerd[1597]: time="2025-09-13T00:28:22.497263474Z" level=info msg="StopPodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" returns successfully" Sep 13 00:28:22.499066 containerd[1597]: time="2025-09-13T00:28:22.498692082Z" level=info msg="RemovePodSandbox for \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" Sep 13 00:28:22.499066 containerd[1597]: time="2025-09-13T00:28:22.498776763Z" level=info msg="Forcibly stopping sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\"" Sep 13 00:28:22.521479 sshd[5579]: Connection closed by authenticating user root 119.1.156.50 port 64581 [preauth] Sep 13 00:28:22.523894 systemd[1]: sshd@50-78.46.184.112:22-119.1.156.50:64581.service: Deactivated successfully. Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.543 [WARNING][5764] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f40c21ca-4f97-49b3-b97b-d3d765668104", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"e7c87f908178757bd1c2e36d7f90665b599792b94f95f2318042b194295e447f", Pod:"calico-apiserver-84657f8b5b-zj9qr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali918c70442a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.544 [INFO][5764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.544 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" iface="eth0" netns="" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.544 [INFO][5764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.544 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.567 [INFO][5774] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.567 [INFO][5774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.567 [INFO][5774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.580 [WARNING][5774] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.580 [INFO][5774] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" HandleID="k8s-pod-network.3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--zj9qr-eth0" Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.583 [INFO][5774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.587506 containerd[1597]: 2025-09-13 00:28:22.584 [INFO][5764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e" Sep 13 00:28:22.587506 containerd[1597]: time="2025-09-13T00:28:22.586556876Z" level=info msg="TearDown network for sandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" successfully" Sep 13 00:28:22.590839 containerd[1597]: time="2025-09-13T00:28:22.590786301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:22.590927 containerd[1597]: time="2025-09-13T00:28:22.590861022Z" level=info msg="RemovePodSandbox \"3c78f9162ec38265b415c16776d7558d6a2071a9e9397c3aba1981bfb9ebb42e\" returns successfully" Sep 13 00:28:22.591773 containerd[1597]: time="2025-09-13T00:28:22.591407745Z" level=info msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.640 [WARNING][5788] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.641 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.641 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" iface="eth0" netns="" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.641 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.641 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.666 [INFO][5795] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.667 [INFO][5795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.667 [INFO][5795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.680 [WARNING][5795] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.680 [INFO][5795] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.683 [INFO][5795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.687675 containerd[1597]: 2025-09-13 00:28:22.686 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.688509 containerd[1597]: time="2025-09-13T00:28:22.688079030Z" level=info msg="TearDown network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" successfully" Sep 13 00:28:22.688509 containerd[1597]: time="2025-09-13T00:28:22.688110110Z" level=info msg="StopPodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" returns successfully" Sep 13 00:28:22.690598 containerd[1597]: time="2025-09-13T00:28:22.688627593Z" level=info msg="RemovePodSandbox for \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" Sep 13 00:28:22.690598 containerd[1597]: time="2025-09-13T00:28:22.688666874Z" level=info msg="Forcibly stopping sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\"" Sep 13 00:28:22.736794 systemd[1]: Started sshd@51-78.46.184.112:22-119.1.156.50:1956.service - OpenSSH per-connection server daemon (119.1.156.50:1956). 
Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.744 [WARNING][5810] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" WorkloadEndpoint="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.744 [INFO][5810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.744 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" iface="eth0" netns="" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.744 [INFO][5810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.744 [INFO][5810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.799 [INFO][5818] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.799 [INFO][5818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.799 [INFO][5818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.812 [WARNING][5818] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.812 [INFO][5818] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" HandleID="k8s-pod-network.438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Workload="ci--4081--3--5--n--c2bbffc425-k8s-whisker--784cfb86b9--clzw2-eth0" Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.814 [INFO][5818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.818593 containerd[1597]: 2025-09-13 00:28:22.816 [INFO][5810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867" Sep 13 00:28:22.818593 containerd[1597]: time="2025-09-13T00:28:22.817731349Z" level=info msg="TearDown network for sandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" successfully" Sep 13 00:28:22.822304 containerd[1597]: time="2025-09-13T00:28:22.822256255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Sep 13 00:28:22.822382 containerd[1597]: time="2025-09-13T00:28:22.822349096Z" level=info msg="RemovePodSandbox \"438fde23b789bf74ae4d09a6350b4c47f0f0c989633fb5e96488e40d87200867\" returns successfully" Sep 13 00:28:22.822899 containerd[1597]: time="2025-09-13T00:28:22.822873979Z" level=info msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.877 [WARNING][5833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0", GenerateName:"calico-kube-controllers-686d78856d-", Namespace:"calico-system", SelfLink:"", UID:"4856d853-be72-4277-97ae-7eb5ab571384", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686d78856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd", Pod:"calico-kube-controllers-686d78856d-wr74j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali023e9a495b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.878 [INFO][5833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.878 [INFO][5833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" iface="eth0" netns="" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.878 [INFO][5833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.878 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.914 [INFO][5841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.914 [INFO][5841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.914 [INFO][5841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.926 [WARNING][5841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.926 [INFO][5841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.929 [INFO][5841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:22.935261 containerd[1597]: 2025-09-13 00:28:22.931 [INFO][5833] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:22.935947 containerd[1597]: time="2025-09-13T00:28:22.935312756Z" level=info msg="TearDown network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" successfully" Sep 13 00:28:22.935947 containerd[1597]: time="2025-09-13T00:28:22.935339517Z" level=info msg="StopPodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" returns successfully" Sep 13 00:28:22.936787 containerd[1597]: time="2025-09-13T00:28:22.936420443Z" level=info msg="RemovePodSandbox for \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" Sep 13 00:28:22.936787 containerd[1597]: time="2025-09-13T00:28:22.936512643Z" level=info msg="Forcibly stopping sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\"" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:22.981 [WARNING][5858] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0", GenerateName:"calico-kube-controllers-686d78856d-", Namespace:"calico-system", SelfLink:"", UID:"4856d853-be72-4277-97ae-7eb5ab571384", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686d78856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"277bfec78962bc4051e792c8a5e7ef9267254bcf87f6adad6a621700863c3cfd", Pod:"calico-kube-controllers-686d78856d-wr74j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali023e9a495b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:22.981 [INFO][5858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:22.982 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" iface="eth0" netns="" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:22.982 [INFO][5858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:22.982 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.011 [INFO][5865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.011 [INFO][5865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.011 [INFO][5865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.021 [WARNING][5865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.021 [INFO][5865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" HandleID="k8s-pod-network.02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--kube--controllers--686d78856d--wr74j-eth0" Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.027 [INFO][5865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:23.030768 containerd[1597]: 2025-09-13 00:28:23.029 [INFO][5858] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81" Sep 13 00:28:23.032319 containerd[1597]: time="2025-09-13T00:28:23.031440459Z" level=info msg="TearDown network for sandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" successfully" Sep 13 00:28:23.056437 containerd[1597]: time="2025-09-13T00:28:23.056008219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:28:23.056437 containerd[1597]: time="2025-09-13T00:28:23.056139820Z" level=info msg="RemovePodSandbox \"02e719f4b5ec962f7cbab56b0e90142f9a3e1b247849af95a9ea150cf595ee81\" returns successfully" Sep 13 00:28:23.056928 containerd[1597]: time="2025-09-13T00:28:23.056845424Z" level=info msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.116 [WARNING][5879] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad", Pod:"coredns-7c65d6cfc9-5d7t9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87977008fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.116 [INFO][5879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.116 [INFO][5879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" iface="eth0" netns="" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.116 [INFO][5879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.116 [INFO][5879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.150 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.150 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.150 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.163 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.163 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.166 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:23.171781 containerd[1597]: 2025-09-13 00:28:23.168 [INFO][5879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.172816 containerd[1597]: time="2025-09-13T00:28:23.171790654Z" level=info msg="TearDown network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" successfully" Sep 13 00:28:23.172816 containerd[1597]: time="2025-09-13T00:28:23.171818734Z" level=info msg="StopPodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" returns successfully" Sep 13 00:28:23.172816 containerd[1597]: time="2025-09-13T00:28:23.172296817Z" level=info msg="RemovePodSandbox for \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" Sep 13 00:28:23.172816 containerd[1597]: time="2025-09-13T00:28:23.172334977Z" level=info msg="Forcibly stopping sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\"" Sep 13 00:28:23.177848 containerd[1597]: time="2025-09-13T00:28:23.177732893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:28:23.178685 containerd[1597]: time="2025-09-13T00:28:23.178634578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:23.181862 containerd[1597]: time="2025-09-13T00:28:23.181817759Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:23.182208 containerd[1597]: time="2025-09-13T00:28:23.182158681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.019360633s" Sep 13 00:28:23.182208 containerd[1597]: time="2025-09-13T00:28:23.182196282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:28:23.182683 containerd[1597]: time="2025-09-13T00:28:23.182633724Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:28:23.185926 containerd[1597]: time="2025-09-13T00:28:23.185805545Z" level=info msg="CreateContainer within sandbox \"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:28:23.211339 containerd[1597]: time="2025-09-13T00:28:23.206987163Z" level=info msg="CreateContainer within sandbox \"226b74d45a33e734cb64d4ca60d6662a788d44db5c35a0a961fd32f8aa5cbe53\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46fbad004c7d0409d9d7c9d46f140248de8d96629cbc5983ede18a0b5aaf2d12\"" Sep 13 00:28:23.211339 containerd[1597]: time="2025-09-13T00:28:23.208010970Z" level=info msg="StartContainer for \"46fbad004c7d0409d9d7c9d46f140248de8d96629cbc5983ede18a0b5aaf2d12\"" Sep 13 00:28:23.326280 containerd[1597]: time="2025-09-13T00:28:23.325980659Z" level=info msg="StartContainer for \"46fbad004c7d0409d9d7c9d46f140248de8d96629cbc5983ede18a0b5aaf2d12\" returns successfully" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.276 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fe9ddbd8-90b0-47ac-b5e3-cbfcbcd96fb8", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6bd79abd35e9e36aa82faa7572efcafcb5a174853e340d10527e86370812e0ad", Pod:"coredns-7c65d6cfc9-5d7t9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87977008fa7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.276 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.335475 
containerd[1597]: 2025-09-13 00:28:23.276 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" iface="eth0" netns="" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.277 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.278 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.317 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.317 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.317 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.330 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.331 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" HandleID="k8s-pod-network.4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Workload="ci--4081--3--5--n--c2bbffc425-k8s-coredns--7c65d6cfc9--5d7t9-eth0" Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.332 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:28:23.335475 containerd[1597]: 2025-09-13 00:28:23.334 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6" Sep 13 00:28:23.338096 containerd[1597]: time="2025-09-13T00:28:23.335524281Z" level=info msg="TearDown network for sandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" successfully" Sep 13 00:28:23.340206 containerd[1597]: time="2025-09-13T00:28:23.339933510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:28:23.340206 containerd[1597]: time="2025-09-13T00:28:23.340015430Z" level=info msg="RemovePodSandbox \"4556963d5bc435c20ebf42d73890797822e1e8e73e5befcedd25190391ee00c6\" returns successfully"
Sep 13 00:28:23.341168 containerd[1597]: time="2025-09-13T00:28:23.340902156Z" level=info msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\""
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.387 [WARNING][5956] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb1d385-d781-4636-ae7d-8e9e046fdbb2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3", Pod:"calico-apiserver-84657f8b5b-tzhb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ae7dce5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.388 [INFO][5956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.388 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" iface="eth0" netns=""
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.388 [INFO][5956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.388 [INFO][5956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.410 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.410 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.411 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.426 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.426 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.428 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:23.432055 containerd[1597]: 2025-09-13 00:28:23.430 [INFO][5956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.433185 containerd[1597]: time="2025-09-13T00:28:23.432093351Z" level=info msg="TearDown network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" successfully"
Sep 13 00:28:23.433185 containerd[1597]: time="2025-09-13T00:28:23.432118551Z" level=info msg="StopPodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" returns successfully"
Sep 13 00:28:23.433185 containerd[1597]: time="2025-09-13T00:28:23.432883236Z" level=info msg="RemovePodSandbox for \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\""
Sep 13 00:28:23.433185 containerd[1597]: time="2025-09-13T00:28:23.432914036Z" level=info msg="Forcibly stopping sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\""
Sep 13 00:28:23.464218 kubelet[2780]: I0913 00:28:23.463208 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.475 [WARNING][5978] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0", GenerateName:"calico-apiserver-84657f8b5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"abb1d385-d781-4636-ae7d-8e9e046fdbb2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 27, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84657f8b5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-5-n-c2bbffc425", ContainerID:"6cce5c941722e7507cf4c5b80f3bb374591da43150e11e331c0a2af7194684c3", Pod:"calico-apiserver-84657f8b5b-tzhb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid4ae7dce5ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.475 [INFO][5978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.475 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" iface="eth0" netns=""
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.475 [INFO][5978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.475 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.517 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.517 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.517 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.527 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.528 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" HandleID="k8s-pod-network.58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2" Workload="ci--4081--3--5--n--c2bbffc425-k8s-calico--apiserver--84657f8b5b--tzhb7-eth0"
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.532 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:28:23.536005 containerd[1597]: 2025-09-13 00:28:23.533 [INFO][5978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2"
Sep 13 00:28:23.537498 containerd[1597]: time="2025-09-13T00:28:23.536872914Z" level=info msg="TearDown network for sandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" successfully"
Sep 13 00:28:23.543150 containerd[1597]: time="2025-09-13T00:28:23.543089434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:28:23.543393 containerd[1597]: time="2025-09-13T00:28:23.543370276Z" level=info msg="RemovePodSandbox \"58a3b4f809870d7c2d7f90f78c3ef19ba6a46a8439418f8407fbb72f35107eb2\" returns successfully"
Sep 13 00:28:23.845085 sshd[5816]: Connection closed by authenticating user root 119.1.156.50 port 1956 [preauth]
Sep 13 00:28:23.851947 systemd[1]: sshd@51-78.46.184.112:22-119.1.156.50:1956.service: Deactivated successfully.
Sep 13 00:28:23.943774 kubelet[2780]: I0913 00:28:23.943710 2780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:28:23.943774 kubelet[2780]: I0913 00:28:23.943765 2780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:28:24.086781 systemd[1]: Started sshd@52-78.46.184.112:22-119.1.156.50:3128.service - OpenSSH per-connection server daemon (119.1.156.50:3128).
Sep 13 00:28:24.350764 kubelet[2780]: I0913 00:28:24.350644 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zcpvc" podStartSLOduration=24.42261805 podStartE2EDuration="40.350624205s" podCreationTimestamp="2025-09-13 00:27:44 +0000 UTC" firstStartedPulling="2025-09-13 00:28:07.255780457 +0000 UTC m=+45.606526061" lastFinishedPulling="2025-09-13 00:28:23.183786652 +0000 UTC m=+61.534532216" observedRunningTime="2025-09-13 00:28:24.349798159 +0000 UTC m=+62.700543723" watchObservedRunningTime="2025-09-13 00:28:24.350624205 +0000 UTC m=+62.701369769"
Sep 13 00:28:25.214610 sshd[6036]: Connection closed by authenticating user root 119.1.156.50 port 3128 [preauth]
Sep 13 00:28:25.220592 systemd[1]: sshd@52-78.46.184.112:22-119.1.156.50:3128.service: Deactivated successfully.
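
[Editor's note: the teardown traces above, for both sandbox IDs, log the same idempotent IPAM-release sequence: take the host-wide lock, try to release by handle ID, fall back to the workload ID, and treat a missing allocation as already released. A minimal sketch of that control flow, in Python rather than the plugin's Go, with invented names (ipam_lock, allocations, release_address) purely to make the logged steps easier to follow:]

    # Illustrative sketch only (not Calico source): the release pattern traced
    # by the ipam_plugin.go lines above, under the stated assumptions.
    import threading

    ipam_lock = threading.Lock()   # stands in for the host-wide IPAM lock
    allocations = {}               # hypothetical: release key -> allocated IP

    def release_address(handle_id: str, workload_id: str) -> None:
        # "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
        with ipam_lock:
            # "Releasing address using handleID"
            if allocations.pop(handle_id, None) is None:
                # "Asked to release address but it doesn't exist. Ignoring"
                # "Releasing address using workloadID"
                allocations.pop(workload_id, None)
        # "Released host-wide IPAM lock." -- teardown proceeds either way
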
Sep 13 00:28:25.448921 systemd[1]: Started sshd@53-78.46.184.112:22-119.1.156.50:4130.service - OpenSSH per-connection server daemon (119.1.156.50:4130).
Sep 13 00:28:26.589021 sshd[6047]: Connection closed by authenticating user root 119.1.156.50 port 4130 [preauth]
Sep 13 00:28:26.590884 systemd[1]: sshd@53-78.46.184.112:22-119.1.156.50:4130.service: Deactivated successfully.
Sep 13 00:28:26.803793 systemd[1]: Started sshd@54-78.46.184.112:22-119.1.156.50:5040.service - OpenSSH per-connection server daemon (119.1.156.50:5040).
Sep 13 00:28:27.886520 sshd[6075]: Connection closed by authenticating user root 119.1.156.50 port 5040 [preauth]
Sep 13 00:28:27.889830 systemd[1]: sshd@54-78.46.184.112:22-119.1.156.50:5040.service: Deactivated successfully.
Sep 13 00:28:28.129868 systemd[1]: Started sshd@55-78.46.184.112:22-119.1.156.50:6370.service - OpenSSH per-connection server daemon (119.1.156.50:6370).
Sep 13 00:28:29.268802 sshd[6081]: Connection closed by authenticating user root 119.1.156.50 port 6370 [preauth]
Sep 13 00:28:29.271937 systemd[1]: sshd@55-78.46.184.112:22-119.1.156.50:6370.service: Deactivated successfully.
Sep 13 00:28:29.509579 systemd[1]: Started sshd@56-78.46.184.112:22-119.1.156.50:7361.service - OpenSSH per-connection server daemon (119.1.156.50:7361).
Sep 13 00:28:30.662146 sshd[6088]: Connection closed by authenticating user root 119.1.156.50 port 7361 [preauth]
Sep 13 00:28:30.664162 systemd[1]: sshd@56-78.46.184.112:22-119.1.156.50:7361.service: Deactivated successfully.
Sep 13 00:28:30.879439 systemd[1]: Started sshd@57-78.46.184.112:22-119.1.156.50:8751.service - OpenSSH per-connection server daemon (119.1.156.50:8751).
Sep 13 00:28:32.014677 sshd[6095]: Connection closed by authenticating user root 119.1.156.50 port 8751 [preauth]
Sep 13 00:28:32.020344 systemd[1]: sshd@57-78.46.184.112:22-119.1.156.50:8751.service: Deactivated successfully.
Sep 13 00:28:32.246204 systemd[1]: Started sshd@58-78.46.184.112:22-119.1.156.50:9633.service - OpenSSH per-connection server daemon (119.1.156.50:9633).
Sep 13 00:28:33.362744 sshd[6100]: Connection closed by authenticating user root 119.1.156.50 port 9633 [preauth]
Sep 13 00:28:33.368858 systemd[1]: sshd@58-78.46.184.112:22-119.1.156.50:9633.service: Deactivated successfully.
Sep 13 00:28:33.605647 systemd[1]: Started sshd@59-78.46.184.112:22-119.1.156.50:10652.service - OpenSSH per-connection server daemon (119.1.156.50:10652).
Sep 13 00:28:34.759680 sshd[6105]: Connection closed by authenticating user root 119.1.156.50 port 10652 [preauth]
Sep 13 00:28:34.762857 systemd[1]: sshd@59-78.46.184.112:22-119.1.156.50:10652.service: Deactivated successfully.
Sep 13 00:28:34.979890 systemd[1]: Started sshd@60-78.46.184.112:22-119.1.156.50:11678.service - OpenSSH per-connection server daemon (119.1.156.50:11678).
Sep 13 00:28:36.103209 sshd[6110]: Connection closed by authenticating user root 119.1.156.50 port 11678 [preauth]
Sep 13 00:28:36.111132 systemd[1]: sshd@60-78.46.184.112:22-119.1.156.50:11678.service: Deactivated successfully.
Sep 13 00:28:36.321933 systemd[1]: Started sshd@61-78.46.184.112:22-119.1.156.50:13492.service - OpenSSH per-connection server daemon (119.1.156.50:13492).
Sep 13 00:28:37.402779 sshd[6115]: Connection closed by authenticating user root 119.1.156.50 port 13492 [preauth]
Sep 13 00:28:37.407879 systemd[1]: sshd@61-78.46.184.112:22-119.1.156.50:13492.service: Deactivated successfully.
Sep 13 00:28:37.624416 systemd[1]: Started sshd@62-78.46.184.112:22-119.1.156.50:14448.service - OpenSSH per-connection server daemon (119.1.156.50:14448).
Sep 13 00:28:38.710229 sshd[6120]: Connection closed by authenticating user root 119.1.156.50 port 14448 [preauth]
Sep 13 00:28:38.713901 systemd[1]: sshd@62-78.46.184.112:22-119.1.156.50:14448.service: Deactivated successfully.
Sep 13 00:28:38.951563 systemd[1]: Started sshd@63-78.46.184.112:22-119.1.156.50:15466.service - OpenSSH per-connection server daemon (119.1.156.50:15466).
Sep 13 00:28:40.099377 sshd[6125]: Connection closed by authenticating user root 119.1.156.50 port 15466 [preauth]
Sep 13 00:28:40.104021 systemd[1]: sshd@63-78.46.184.112:22-119.1.156.50:15466.service: Deactivated successfully.
Sep 13 00:28:40.330918 systemd[1]: Started sshd@64-78.46.184.112:22-119.1.156.50:16393.service - OpenSSH per-connection server daemon (119.1.156.50:16393).
Sep 13 00:28:41.464408 sshd[6130]: Connection closed by authenticating user root 119.1.156.50 port 16393 [preauth]
Sep 13 00:28:41.466932 systemd[1]: sshd@64-78.46.184.112:22-119.1.156.50:16393.service: Deactivated successfully.
Sep 13 00:28:41.695819 systemd[1]: Started sshd@65-78.46.184.112:22-119.1.156.50:17747.service - OpenSSH per-connection server daemon (119.1.156.50:17747).
Sep 13 00:28:42.834186 sshd[6135]: Connection closed by authenticating user root 119.1.156.50 port 17747 [preauth]
Sep 13 00:28:42.838222 systemd[1]: sshd@65-78.46.184.112:22-119.1.156.50:17747.service: Deactivated successfully.
Sep 13 00:28:43.048411 systemd[1]: Started sshd@66-78.46.184.112:22-119.1.156.50:18696.service - OpenSSH per-connection server daemon (119.1.156.50:18696).
Sep 13 00:28:43.253726 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.OsQHau.mount: Deactivated successfully.
Sep 13 00:28:44.156602 sshd[6140]: Connection closed by authenticating user root 119.1.156.50 port 18696 [preauth]
Sep 13 00:28:44.167048 systemd[1]: sshd@66-78.46.184.112:22-119.1.156.50:18696.service: Deactivated successfully.
Sep 13 00:28:44.404886 systemd[1]: Started sshd@67-78.46.184.112:22-119.1.156.50:20209.service - OpenSSH per-connection server daemon (119.1.156.50:20209).
Sep 13 00:28:45.591992 sshd[6166]: Connection closed by authenticating user root 119.1.156.50 port 20209 [preauth]
Sep 13 00:28:45.592993 systemd[1]: sshd@67-78.46.184.112:22-119.1.156.50:20209.service: Deactivated successfully.
Sep 13 00:28:45.812733 systemd[1]: Started sshd@68-78.46.184.112:22-119.1.156.50:21121.service - OpenSSH per-connection server daemon (119.1.156.50:21121).
Sep 13 00:28:46.964953 sshd[6177]: Connection closed by authenticating user root 119.1.156.50 port 21121 [preauth]
Sep 13 00:28:46.971333 systemd[1]: sshd@68-78.46.184.112:22-119.1.156.50:21121.service: Deactivated successfully.
Sep 13 00:28:47.206884 systemd[1]: Started sshd@69-78.46.184.112:22-119.1.156.50:22209.service - OpenSSH per-connection server daemon (119.1.156.50:22209).
Sep 13 00:28:48.360122 sshd[6182]: Connection closed by authenticating user root 119.1.156.50 port 22209 [preauth]
Sep 13 00:28:48.366431 systemd[1]: sshd@69-78.46.184.112:22-119.1.156.50:22209.service: Deactivated successfully.
Sep 13 00:28:48.612853 systemd[1]: Started sshd@70-78.46.184.112:22-119.1.156.50:23153.service - OpenSSH per-connection server daemon (119.1.156.50:23153).
Sep 13 00:28:49.791724 sshd[6187]: Connection closed by authenticating user root 119.1.156.50 port 23153 [preauth]
Sep 13 00:28:49.795298 systemd[1]: sshd@70-78.46.184.112:22-119.1.156.50:23153.service: Deactivated successfully.
Sep 13 00:28:50.013887 systemd[1]: Started sshd@71-78.46.184.112:22-119.1.156.50:25073.service - OpenSSH per-connection server daemon (119.1.156.50:25073).
Sep 13 00:28:51.148175 sshd[6211]: Connection closed by authenticating user root 119.1.156.50 port 25073 [preauth]
Sep 13 00:28:51.153952 systemd[1]: sshd@71-78.46.184.112:22-119.1.156.50:25073.service: Deactivated successfully.
Sep 13 00:28:51.366891 systemd[1]: Started sshd@72-78.46.184.112:22-119.1.156.50:25981.service - OpenSSH per-connection server daemon (119.1.156.50:25981).
Sep 13 00:28:52.448947 sshd[6218]: Connection closed by authenticating user root 119.1.156.50 port 25981 [preauth]
Sep 13 00:28:52.453366 systemd[1]: sshd@72-78.46.184.112:22-119.1.156.50:25981.service: Deactivated successfully.
Sep 13 00:28:52.688006 systemd[1]: Started sshd@73-78.46.184.112:22-119.1.156.50:26948.service - OpenSSH per-connection server daemon (119.1.156.50:26948).
Sep 13 00:28:53.831495 sshd[6223]: Connection closed by authenticating user root 119.1.156.50 port 26948 [preauth]
Sep 13 00:28:53.857844 systemd[1]: sshd@73-78.46.184.112:22-119.1.156.50:26948.service: Deactivated successfully.
Sep 13 00:28:54.049038 systemd[1]: Started sshd@74-78.46.184.112:22-119.1.156.50:27989.service - OpenSSH per-connection server daemon (119.1.156.50:27989).
Sep 13 00:28:55.153816 sshd[6267]: Connection closed by authenticating user root 119.1.156.50 port 27989 [preauth]
Sep 13 00:28:55.156854 systemd[1]: sshd@74-78.46.184.112:22-119.1.156.50:27989.service: Deactivated successfully.
Sep 13 00:28:55.414921 systemd[1]: Started sshd@75-78.46.184.112:22-119.1.156.50:29008.service - OpenSSH per-connection server daemon (119.1.156.50:29008).
Sep 13 00:28:56.573536 sshd[6274]: Connection closed by authenticating user root 119.1.156.50 port 29008 [preauth]
Sep 13 00:28:56.580029 systemd[1]: sshd@75-78.46.184.112:22-119.1.156.50:29008.service: Deactivated successfully.
Sep 13 00:28:56.786285 systemd[1]: Started sshd@76-78.46.184.112:22-119.1.156.50:29932.service - OpenSSH per-connection server daemon (119.1.156.50:29932).
Sep 13 00:28:57.894876 sshd[6300]: Connection closed by authenticating user root 119.1.156.50 port 29932 [preauth]
Sep 13 00:28:57.900257 systemd[1]: sshd@76-78.46.184.112:22-119.1.156.50:29932.service: Deactivated successfully.
Sep 13 00:28:58.113959 systemd[1]: Started sshd@77-78.46.184.112:22-119.1.156.50:31175.service - OpenSSH per-connection server daemon (119.1.156.50:31175).
Sep 13 00:28:59.199679 sshd[6305]: Connection closed by authenticating user root 119.1.156.50 port 31175 [preauth]
Sep 13 00:28:59.206808 systemd[1]: sshd@77-78.46.184.112:22-119.1.156.50:31175.service: Deactivated successfully.
Sep 13 00:28:59.441135 systemd[1]: Started sshd@78-78.46.184.112:22-119.1.156.50:32192.service - OpenSSH per-connection server daemon (119.1.156.50:32192).
Sep 13 00:29:00.573732 sshd[6312]: Connection closed by authenticating user root 119.1.156.50 port 32192 [preauth]
Sep 13 00:29:00.578000 systemd[1]: sshd@78-78.46.184.112:22-119.1.156.50:32192.service: Deactivated successfully.
Sep 13 00:29:00.815549 systemd[1]: Started sshd@79-78.46.184.112:22-119.1.156.50:33073.service - OpenSSH per-connection server daemon (119.1.156.50:33073).
Sep 13 00:29:01.983011 sshd[6317]: Connection closed by authenticating user root 119.1.156.50 port 33073 [preauth]
Sep 13 00:29:01.990305 systemd[1]: sshd@79-78.46.184.112:22-119.1.156.50:33073.service: Deactivated successfully.
Sep 13 00:29:02.213805 systemd[1]: Started sshd@80-78.46.184.112:22-119.1.156.50:33950.service - OpenSSH per-connection server daemon (119.1.156.50:33950).
Sep 13 00:29:03.373124 sshd[6322]: Connection closed by authenticating user root 119.1.156.50 port 33950 [preauth]
Sep 13 00:29:03.376302 systemd[1]: sshd@80-78.46.184.112:22-119.1.156.50:33950.service: Deactivated successfully.
Sep 13 00:29:03.616684 systemd[1]: Started sshd@81-78.46.184.112:22-119.1.156.50:34831.service - OpenSSH per-connection server daemon (119.1.156.50:34831).
Sep 13 00:29:04.767991 sshd[6328]: Connection closed by authenticating user root 119.1.156.50 port 34831 [preauth]
Sep 13 00:29:04.776486 systemd[1]: sshd@81-78.46.184.112:22-119.1.156.50:34831.service: Deactivated successfully.
Sep 13 00:29:05.007410 systemd[1]: Started sshd@82-78.46.184.112:22-119.1.156.50:36067.service - OpenSSH per-connection server daemon (119.1.156.50:36067).
Sep 13 00:29:06.116303 sshd[6333]: Connection closed by authenticating user root 119.1.156.50 port 36067 [preauth]
Sep 13 00:29:06.120488 systemd[1]: sshd@82-78.46.184.112:22-119.1.156.50:36067.service: Deactivated successfully.
Sep 13 00:29:06.358024 systemd[1]: Started sshd@83-78.46.184.112:22-119.1.156.50:36917.service - OpenSSH per-connection server daemon (119.1.156.50:36917).
Sep 13 00:29:07.514245 sshd[6338]: Connection closed by authenticating user root 119.1.156.50 port 36917 [preauth]
Sep 13 00:29:07.518032 systemd[1]: sshd@83-78.46.184.112:22-119.1.156.50:36917.service: Deactivated successfully.
Sep 13 00:29:07.747947 systemd[1]: Started sshd@84-78.46.184.112:22-119.1.156.50:37932.service - OpenSSH per-connection server daemon (119.1.156.50:37932).
Sep 13 00:29:08.882632 sshd[6343]: Connection closed by authenticating user root 119.1.156.50 port 37932 [preauth]
Sep 13 00:29:08.887695 systemd[1]: sshd@84-78.46.184.112:22-119.1.156.50:37932.service: Deactivated successfully.
Sep 13 00:29:09.099078 systemd[1]: Started sshd@85-78.46.184.112:22-119.1.156.50:38998.service - OpenSSH per-connection server daemon (119.1.156.50:38998).
Sep 13 00:29:10.194511 sshd[6348]: Connection closed by authenticating user root 119.1.156.50 port 38998 [preauth]
Sep 13 00:29:10.195822 systemd[1]: sshd@85-78.46.184.112:22-119.1.156.50:38998.service: Deactivated successfully.
Sep 13 00:29:10.437354 systemd[1]: Started sshd@86-78.46.184.112:22-119.1.156.50:39870.service - OpenSSH per-connection server daemon (119.1.156.50:39870).
Sep 13 00:29:11.571772 sshd[6354]: Connection closed by authenticating user root 119.1.156.50 port 39870 [preauth]
Sep 13 00:29:11.578180 systemd[1]: sshd@86-78.46.184.112:22-119.1.156.50:39870.service: Deactivated successfully.
Sep 13 00:29:11.792841 systemd[1]: Started sshd@87-78.46.184.112:22-119.1.156.50:40623.service - OpenSSH per-connection server daemon (119.1.156.50:40623).
Sep 13 00:29:12.897172 sshd[6359]: Connection closed by authenticating user root 119.1.156.50 port 40623 [preauth]
Sep 13 00:29:12.903431 systemd[1]: sshd@87-78.46.184.112:22-119.1.156.50:40623.service: Deactivated successfully.
Sep 13 00:29:13.130127 systemd[1]: Started sshd@88-78.46.184.112:22-119.1.156.50:41794.service - OpenSSH per-connection server daemon (119.1.156.50:41794).
Sep 13 00:29:14.242666 sshd[6364]: Connection closed by authenticating user root 119.1.156.50 port 41794 [preauth]
Sep 13 00:29:14.249713 systemd[1]: sshd@88-78.46.184.112:22-119.1.156.50:41794.service: Deactivated successfully.
Sep 13 00:29:14.485660 systemd[1]: Started sshd@89-78.46.184.112:22-119.1.156.50:42799.service - OpenSSH per-connection server daemon (119.1.156.50:42799).
Sep 13 00:29:15.609556 sshd[6369]: Connection closed by authenticating user root 119.1.156.50 port 42799 [preauth]
Sep 13 00:29:15.616617 systemd[1]: sshd@89-78.46.184.112:22-119.1.156.50:42799.service: Deactivated successfully.
Sep 13 00:29:15.832816 systemd[1]: Started sshd@90-78.46.184.112:22-119.1.156.50:43788.service - OpenSSH per-connection server daemon (119.1.156.50:43788).
Sep 13 00:29:16.728139 sshd[6374]: Invalid user user from 119.1.156.50 port 43788
Sep 13 00:29:16.944886 sshd[6374]: Connection closed by invalid user user 119.1.156.50 port 43788 [preauth]
Sep 13 00:29:16.948271 systemd[1]: sshd@90-78.46.184.112:22-119.1.156.50:43788.service: Deactivated successfully.
Sep 13 00:29:17.176583 systemd[1]: Started sshd@91-78.46.184.112:22-119.1.156.50:44730.service - OpenSSH per-connection server daemon (119.1.156.50:44730).
Sep 13 00:29:18.080104 sshd[6379]: Invalid user user from 119.1.156.50 port 44730
Sep 13 00:29:18.300520 sshd[6379]: Connection closed by invalid user user 119.1.156.50 port 44730 [preauth]
Sep 13 00:29:18.304613 systemd[1]: sshd@91-78.46.184.112:22-119.1.156.50:44730.service: Deactivated successfully.
Sep 13 00:29:18.531090 systemd[1]: Started sshd@92-78.46.184.112:22-119.1.156.50:45619.service - OpenSSH per-connection server daemon (119.1.156.50:45619).
Sep 13 00:29:19.435935 sshd[6384]: Invalid user user from 119.1.156.50 port 45619
Sep 13 00:29:19.657154 sshd[6384]: Connection closed by invalid user user 119.1.156.50 port 45619 [preauth]
Sep 13 00:29:19.659675 systemd[1]: sshd@92-78.46.184.112:22-119.1.156.50:45619.service: Deactivated successfully.
Sep 13 00:29:19.899311 systemd[1]: Started sshd@93-78.46.184.112:22-119.1.156.50:46576.service - OpenSSH per-connection server daemon (119.1.156.50:46576).
Sep 13 00:29:20.827833 sshd[6389]: Invalid user user from 119.1.156.50 port 46576
Sep 13 00:29:21.054792 sshd[6389]: Connection closed by invalid user user 119.1.156.50 port 46576 [preauth]
Sep 13 00:29:21.058443 systemd[1]: sshd@93-78.46.184.112:22-119.1.156.50:46576.service: Deactivated successfully.
Sep 13 00:29:21.288026 systemd[1]: Started sshd@94-78.46.184.112:22-119.1.156.50:47662.service - OpenSSH per-connection server daemon (119.1.156.50:47662).
Sep 13 00:29:22.198444 sshd[6394]: Invalid user user from 119.1.156.50 port 47662
Sep 13 00:29:22.421109 sshd[6394]: Connection closed by invalid user user 119.1.156.50 port 47662 [preauth]
Sep 13 00:29:22.424082 systemd[1]: sshd@94-78.46.184.112:22-119.1.156.50:47662.service: Deactivated successfully.
Sep 13 00:29:22.636772 systemd[1]: Started sshd@95-78.46.184.112:22-119.1.156.50:48478.service - OpenSSH per-connection server daemon (119.1.156.50:48478).
Sep 13 00:29:23.511400 sshd[6401]: Invalid user user from 119.1.156.50 port 48478
Sep 13 00:29:23.724224 sshd[6401]: Connection closed by invalid user user 119.1.156.50 port 48478 [preauth]
Sep 13 00:29:23.727063 systemd[1]: sshd@95-78.46.184.112:22-119.1.156.50:48478.service: Deactivated successfully.
Sep 13 00:29:23.962837 systemd[1]: Started sshd@96-78.46.184.112:22-119.1.156.50:49334.service - OpenSSH per-connection server daemon (119.1.156.50:49334).
Sep 13 00:29:24.883887 sshd[6448]: Invalid user user from 119.1.156.50 port 49334
Sep 13 00:29:25.107257 sshd[6448]: Connection closed by invalid user user 119.1.156.50 port 49334 [preauth]
Sep 13 00:29:25.110544 systemd[1]: sshd@96-78.46.184.112:22-119.1.156.50:49334.service: Deactivated successfully.
Sep 13 00:29:25.340024 systemd[1]: Started sshd@97-78.46.184.112:22-119.1.156.50:50362.service - OpenSSH per-connection server daemon (119.1.156.50:50362).
Sep 13 00:29:26.230532 sshd[6459]: Invalid user user from 119.1.156.50 port 50362
Sep 13 00:29:26.448596 sshd[6459]: Connection closed by invalid user user 119.1.156.50 port 50362 [preauth]
Sep 13 00:29:26.450896 systemd[1]: sshd@97-78.46.184.112:22-119.1.156.50:50362.service: Deactivated successfully.
Sep 13 00:29:26.677907 systemd[1]: Started sshd@98-78.46.184.112:22-119.1.156.50:51327.service - OpenSSH per-connection server daemon (119.1.156.50:51327).
Sep 13 00:29:27.598544 sshd[6483]: Invalid user user from 119.1.156.50 port 51327
Sep 13 00:29:27.820492 sshd[6483]: Connection closed by invalid user user 119.1.156.50 port 51327 [preauth]
Sep 13 00:29:27.822051 systemd[1]: sshd@98-78.46.184.112:22-119.1.156.50:51327.service: Deactivated successfully.
Sep 13 00:29:28.049791 systemd[1]: Started sshd@99-78.46.184.112:22-119.1.156.50:52252.service - OpenSSH per-connection server daemon (119.1.156.50:52252).
Sep 13 00:29:28.953564 sshd[6488]: Invalid user user from 119.1.156.50 port 52252
Sep 13 00:29:29.174326 sshd[6488]: Connection closed by invalid user user 119.1.156.50 port 52252 [preauth]
Sep 13 00:29:29.177146 systemd[1]: sshd@99-78.46.184.112:22-119.1.156.50:52252.service: Deactivated successfully.
Sep 13 00:29:29.421864 systemd[1]: Started sshd@100-78.46.184.112:22-119.1.156.50:53431.service - OpenSSH per-connection server daemon (119.1.156.50:53431).
Sep 13 00:29:30.365998 sshd[6495]: Invalid user user from 119.1.156.50 port 53431
Sep 13 00:29:30.595591 sshd[6495]: Connection closed by invalid user user 119.1.156.50 port 53431 [preauth]
Sep 13 00:29:30.599791 systemd[1]: sshd@100-78.46.184.112:22-119.1.156.50:53431.service: Deactivated successfully.
Sep 13 00:29:30.822874 systemd[1]: Started sshd@101-78.46.184.112:22-119.1.156.50:54435.service - OpenSSH per-connection server daemon (119.1.156.50:54435).
Sep 13 00:29:31.744508 sshd[6502]: Invalid user user from 119.1.156.50 port 54435
Sep 13 00:29:31.969295 sshd[6502]: Connection closed by invalid user user 119.1.156.50 port 54435 [preauth]
Sep 13 00:29:31.973726 systemd[1]: sshd@101-78.46.184.112:22-119.1.156.50:54435.service: Deactivated successfully.
Sep 13 00:29:32.203819 systemd[1]: Started sshd@102-78.46.184.112:22-119.1.156.50:55224.service - OpenSSH per-connection server daemon (119.1.156.50:55224).
Sep 13 00:29:33.131774 sshd[6507]: Invalid user user from 119.1.156.50 port 55224
Sep 13 00:29:33.356444 sshd[6507]: Connection closed by invalid user user 119.1.156.50 port 55224 [preauth]
Sep 13 00:29:33.359048 systemd[1]: sshd@102-78.46.184.112:22-119.1.156.50:55224.service: Deactivated successfully.
Sep 13 00:29:33.584889 systemd[1]: Started sshd@103-78.46.184.112:22-119.1.156.50:56106.service - OpenSSH per-connection server daemon (119.1.156.50:56106).
Sep 13 00:29:34.503057 sshd[6512]: Invalid user user from 119.1.156.50 port 56106
Sep 13 00:29:34.727641 sshd[6512]: Connection closed by invalid user user 119.1.156.50 port 56106 [preauth]
Sep 13 00:29:34.732004 systemd[1]: sshd@103-78.46.184.112:22-119.1.156.50:56106.service: Deactivated successfully.
Sep 13 00:29:34.938018 systemd[1]: Started sshd@104-78.46.184.112:22-119.1.156.50:57052.service - OpenSSH per-connection server daemon (119.1.156.50:57052).
Sep 13 00:29:35.800127 sshd[6517]: Invalid user user from 119.1.156.50 port 57052
Sep 13 00:29:36.010583 sshd[6517]: Connection closed by invalid user user 119.1.156.50 port 57052 [preauth]
Sep 13 00:29:36.014760 systemd[1]: sshd@104-78.46.184.112:22-119.1.156.50:57052.service: Deactivated successfully.
Sep 13 00:29:36.259984 systemd[1]: Started sshd@105-78.46.184.112:22-119.1.156.50:57987.service - OpenSSH per-connection server daemon (119.1.156.50:57987).
Sep 13 00:29:37.193194 sshd[6522]: Invalid user user from 119.1.156.50 port 57987
Sep 13 00:29:37.421912 sshd[6522]: Connection closed by invalid user user 119.1.156.50 port 57987 [preauth]
Sep 13 00:29:37.423560 systemd[1]: sshd@105-78.46.184.112:22-119.1.156.50:57987.service: Deactivated successfully.
Sep 13 00:29:37.637792 systemd[1]: Started sshd@106-78.46.184.112:22-119.1.156.50:59091.service - OpenSSH per-connection server daemon (119.1.156.50:59091).
Sep 13 00:29:38.519633 sshd[6527]: Invalid user user from 119.1.156.50 port 59091
Sep 13 00:29:38.735542 sshd[6527]: Connection closed by invalid user user 119.1.156.50 port 59091 [preauth]
Sep 13 00:29:38.737292 systemd[1]: sshd@106-78.46.184.112:22-119.1.156.50:59091.service: Deactivated successfully.
Sep 13 00:29:38.978813 systemd[1]: Started sshd@107-78.46.184.112:22-119.1.156.50:59958.service - OpenSSH per-connection server daemon (119.1.156.50:59958).
Sep 13 00:29:39.917121 sshd[6546]: Invalid user user from 119.1.156.50 port 59958
Sep 13 00:29:40.144578 sshd[6546]: Connection closed by invalid user user 119.1.156.50 port 59958 [preauth]
Sep 13 00:29:40.147807 systemd[1]: sshd@107-78.46.184.112:22-119.1.156.50:59958.service: Deactivated successfully.
Sep 13 00:29:40.370157 systemd[1]: Started sshd@108-78.46.184.112:22-119.1.156.50:60980.service - OpenSSH per-connection server daemon (119.1.156.50:60980).
Sep 13 00:29:41.278318 sshd[6551]: Invalid user user from 119.1.156.50 port 60980
Sep 13 00:29:41.505485 sshd[6551]: Connection closed by invalid user user 119.1.156.50 port 60980 [preauth]
Sep 13 00:29:41.503787 systemd[1]: sshd@108-78.46.184.112:22-119.1.156.50:60980.service: Deactivated successfully.
Sep 13 00:29:41.711826 systemd[1]: Started sshd@109-78.46.184.112:22-119.1.156.50:61775.service - OpenSSH per-connection server daemon (119.1.156.50:61775).
Sep 13 00:29:42.592820 sshd[6563]: Invalid user user from 119.1.156.50 port 61775
Sep 13 00:29:42.805511 sshd[6563]: Connection closed by invalid user user 119.1.156.50 port 61775 [preauth]
Sep 13 00:29:42.809725 systemd[1]: sshd@109-78.46.184.112:22-119.1.156.50:61775.service: Deactivated successfully.
Sep 13 00:29:43.053211 systemd[1]: Started sshd@110-78.46.184.112:22-119.1.156.50:62651.service - OpenSSH per-connection server daemon (119.1.156.50:62651).
Sep 13 00:29:43.972091 sshd[6568]: Invalid user user from 119.1.156.50 port 62651
Sep 13 00:29:44.201100 sshd[6568]: Connection closed by invalid user user 119.1.156.50 port 62651 [preauth]
Sep 13 00:29:44.205405 systemd[1]: sshd@110-78.46.184.112:22-119.1.156.50:62651.service: Deactivated successfully.
Sep 13 00:29:44.427012 systemd[1]: Started sshd@111-78.46.184.112:22-119.1.156.50:63595.service - OpenSSH per-connection server daemon (119.1.156.50:63595).
Sep 13 00:29:45.330162 sshd[6593]: Invalid user user from 119.1.156.50 port 63595
Sep 13 00:29:45.551195 sshd[6593]: Connection closed by invalid user user 119.1.156.50 port 63595 [preauth]
Sep 13 00:29:45.553409 systemd[1]: sshd@111-78.46.184.112:22-119.1.156.50:63595.service: Deactivated successfully.
Sep 13 00:29:45.792091 systemd[1]: Started sshd@112-78.46.184.112:22-119.1.156.50:64538.service - OpenSSH per-connection server daemon (119.1.156.50:64538).
Sep 13 00:29:46.710532 sshd[6598]: Invalid user user from 119.1.156.50 port 64538
Sep 13 00:29:46.933784 sshd[6598]: Connection closed by invalid user user 119.1.156.50 port 64538 [preauth]
Sep 13 00:29:46.937337 systemd[1]: sshd@112-78.46.184.112:22-119.1.156.50:64538.service: Deactivated successfully.
Sep 13 00:29:47.168242 systemd[1]: Started sshd@113-78.46.184.112:22-119.1.156.50:65392.service - OpenSSH per-connection server daemon (119.1.156.50:65392).
Sep 13 00:29:48.097117 sshd[6603]: Invalid user user from 119.1.156.50 port 65392
Sep 13 00:29:48.321762 sshd[6603]: Connection closed by invalid user user 119.1.156.50 port 65392 [preauth]
Sep 13 00:29:48.325745 systemd[1]: sshd@113-78.46.184.112:22-119.1.156.50:65392.service: Deactivated successfully.
Sep 13 00:29:48.563265 systemd[1]: Started sshd@114-78.46.184.112:22-119.1.156.50:1871.service - OpenSSH per-connection server daemon (119.1.156.50:1871).
Sep 13 00:29:48.755238 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.PtPBZM.mount: Deactivated successfully.
Sep 13 00:29:49.493859 sshd[6608]: Invalid user user from 119.1.156.50 port 1871
Sep 13 00:29:49.721538 sshd[6608]: Connection closed by invalid user user 119.1.156.50 port 1871 [preauth]
Sep 13 00:29:49.725900 systemd[1]: sshd@114-78.46.184.112:22-119.1.156.50:1871.service: Deactivated successfully.
Sep 13 00:29:49.953969 systemd[1]: Started sshd@115-78.46.184.112:22-119.1.156.50:2818.service - OpenSSH per-connection server daemon (119.1.156.50:2818).
Sep 13 00:29:50.866822 sshd[6634]: Invalid user user from 119.1.156.50 port 2818
Sep 13 00:29:51.090636 sshd[6634]: Connection closed by invalid user user 119.1.156.50 port 2818 [preauth]
Sep 13 00:29:51.094194 systemd[1]: sshd@115-78.46.184.112:22-119.1.156.50:2818.service: Deactivated successfully.
Sep 13 00:29:51.332216 systemd[1]: Started sshd@116-78.46.184.112:22-119.1.156.50:3743.service - OpenSSH per-connection server daemon (119.1.156.50:3743).
Sep 13 00:29:52.276064 sshd[6639]: Invalid user user from 119.1.156.50 port 3743
Sep 13 00:29:52.506774 sshd[6639]: Connection closed by invalid user user 119.1.156.50 port 3743 [preauth]
Sep 13 00:29:52.510665 systemd[1]: sshd@116-78.46.184.112:22-119.1.156.50:3743.service: Deactivated successfully.
Sep 13 00:29:52.738009 systemd[1]: Started sshd@117-78.46.184.112:22-119.1.156.50:4628.service - OpenSSH per-connection server daemon (119.1.156.50:4628).
Sep 13 00:29:53.663189 sshd[6644]: Invalid user user from 119.1.156.50 port 4628
Sep 13 00:29:53.853679 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.AerUZm.mount: Deactivated successfully.
Sep 13 00:29:53.890214 sshd[6644]: Connection closed by invalid user user 119.1.156.50 port 4628 [preauth]
Sep 13 00:29:53.895335 systemd[1]: sshd@117-78.46.184.112:22-119.1.156.50:4628.service: Deactivated successfully.
Sep 13 00:29:54.117934 systemd[1]: Started sshd@118-78.46.184.112:22-119.1.156.50:5509.service - OpenSSH per-connection server daemon (119.1.156.50:5509).
Sep 13 00:29:55.040551 sshd[6690]: Invalid user user from 119.1.156.50 port 5509
Sep 13 00:29:55.264565 sshd[6690]: Connection closed by invalid user user 119.1.156.50 port 5509 [preauth]
Sep 13 00:29:55.269973 systemd[1]: sshd@118-78.46.184.112:22-119.1.156.50:5509.service: Deactivated successfully.
Sep 13 00:29:55.504033 systemd[1]: Started sshd@119-78.46.184.112:22-119.1.156.50:6465.service - OpenSSH per-connection server daemon (119.1.156.50:6465).
Sep 13 00:29:56.418535 sshd[6695]: Invalid user user from 119.1.156.50 port 6465
Sep 13 00:29:56.643134 sshd[6695]: Connection closed by invalid user user 119.1.156.50 port 6465 [preauth]
Sep 13 00:29:56.646907 systemd[1]: sshd@119-78.46.184.112:22-119.1.156.50:6465.service: Deactivated successfully.
Sep 13 00:29:56.873291 systemd[1]: Started sshd@120-78.46.184.112:22-119.1.156.50:7447.service - OpenSSH per-connection server daemon (119.1.156.50:7447).
Sep 13 00:29:57.791624 sshd[6721]: Invalid user user from 119.1.156.50 port 7447
Sep 13 00:29:58.014626 sshd[6721]: Connection closed by invalid user user 119.1.156.50 port 7447 [preauth]
Sep 13 00:29:58.018660 systemd[1]: sshd@120-78.46.184.112:22-119.1.156.50:7447.service: Deactivated successfully.
Sep 13 00:29:58.249318 systemd[1]: Started sshd@121-78.46.184.112:22-119.1.156.50:8318.service - OpenSSH per-connection server daemon (119.1.156.50:8318).
Sep 13 00:29:59.161523 sshd[6726]: Invalid user user from 119.1.156.50 port 8318
Sep 13 00:29:59.177829 systemd[1]: Started sshd@122-78.46.184.112:22-147.75.109.163:39250.service - OpenSSH per-connection server daemon (147.75.109.163:39250).
Sep 13 00:29:59.383156 sshd[6726]: Connection closed by invalid user user 119.1.156.50 port 8318 [preauth]
Sep 13 00:29:59.387129 systemd[1]: sshd@121-78.46.184.112:22-119.1.156.50:8318.service: Deactivated successfully.
Sep 13 00:29:59.598055 systemd[1]: Started sshd@123-78.46.184.112:22-119.1.156.50:9236.service - OpenSSH per-connection server daemon (119.1.156.50:9236).
Sep 13 00:30:00.173009 sshd[6733]: Accepted publickey for core from 147.75.109.163 port 39250 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:00.176956 sshd[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:00.183210 systemd-logind[1566]: New session 8 of user core.
Sep 13 00:30:00.191311 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:30:00.487449 sshd[6738]: Invalid user user from 119.1.156.50 port 9236
Sep 13 00:30:00.705705 sshd[6738]: Connection closed by invalid user user 119.1.156.50 port 9236 [preauth]
Sep 13 00:30:00.711231 systemd[1]: sshd@123-78.46.184.112:22-119.1.156.50:9236.service: Deactivated successfully.
Sep 13 00:30:00.933007 systemd[1]: Started sshd@124-78.46.184.112:22-119.1.156.50:10233.service - OpenSSH per-connection server daemon (119.1.156.50:10233).
Sep 13 00:30:01.052862 sshd[6733]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:01.062997 systemd[1]: sshd@122-78.46.184.112:22-147.75.109.163:39250.service: Deactivated successfully.
Sep 13 00:30:01.074824 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:30:01.076607 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:30:01.078991 systemd-logind[1566]: Removed session 8.
Sep 13 00:30:01.846965 sshd[6753]: Invalid user user from 119.1.156.50 port 10233
Sep 13 00:30:02.063385 sshd[6753]: Connection closed by invalid user user 119.1.156.50 port 10233 [preauth]
Sep 13 00:30:02.067939 systemd[1]: sshd@124-78.46.184.112:22-119.1.156.50:10233.service: Deactivated successfully.
Sep 13 00:30:02.292866 systemd[1]: Started sshd@125-78.46.184.112:22-119.1.156.50:11095.service - OpenSSH per-connection server daemon (119.1.156.50:11095).
Sep 13 00:30:03.205185 sshd[6761]: Invalid user user from 119.1.156.50 port 11095
Sep 13 00:30:03.422013 sshd[6761]: Connection closed by invalid user user 119.1.156.50 port 11095 [preauth]
Sep 13 00:30:03.425601 systemd[1]: sshd@125-78.46.184.112:22-119.1.156.50:11095.service: Deactivated successfully.
Sep 13 00:30:03.658011 systemd[1]: Started sshd@126-78.46.184.112:22-119.1.156.50:11926.service - OpenSSH per-connection server daemon (119.1.156.50:11926).
Sep 13 00:30:04.585141 sshd[6767]: Invalid user user from 119.1.156.50 port 11926
Sep 13 00:30:04.802800 sshd[6767]: Connection closed by invalid user user 119.1.156.50 port 11926 [preauth]
Sep 13 00:30:04.809112 systemd[1]: sshd@126-78.46.184.112:22-119.1.156.50:11926.service: Deactivated successfully.
Sep 13 00:30:05.033840 systemd[1]: Started sshd@127-78.46.184.112:22-119.1.156.50:12875.service - OpenSSH per-connection server daemon (119.1.156.50:12875).
Sep 13 00:30:05.935787 sshd[6772]: Invalid user user from 119.1.156.50 port 12875
Sep 13 00:30:06.149858 sshd[6772]: Connection closed by invalid user user 119.1.156.50 port 12875 [preauth]
Sep 13 00:30:06.154187 systemd[1]: sshd@127-78.46.184.112:22-119.1.156.50:12875.service: Deactivated successfully.
Sep 13 00:30:06.229855 systemd[1]: Started sshd@128-78.46.184.112:22-147.75.109.163:44432.service - OpenSSH per-connection server daemon (147.75.109.163:44432).
Sep 13 00:30:06.391029 systemd[1]: Started sshd@129-78.46.184.112:22-119.1.156.50:13882.service - OpenSSH per-connection server daemon (119.1.156.50:13882).
Sep 13 00:30:07.239789 sshd[6777]: Accepted publickey for core from 147.75.109.163 port 44432 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:07.241871 sshd[6777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:07.254729 systemd-logind[1566]: New session 9 of user core.
Sep 13 00:30:07.260001 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:30:07.312412 sshd[6779]: Invalid user user from 119.1.156.50 port 13882
Sep 13 00:30:07.538004 sshd[6779]: Connection closed by invalid user user 119.1.156.50 port 13882 [preauth]
Sep 13 00:30:07.540740 systemd[1]: sshd@129-78.46.184.112:22-119.1.156.50:13882.service: Deactivated successfully.
Sep 13 00:30:07.778577 systemd[1]: Started sshd@130-78.46.184.112:22-119.1.156.50:14833.service - OpenSSH per-connection server daemon (119.1.156.50:14833).
Sep 13 00:30:08.028819 sshd[6777]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:08.033330 systemd[1]: sshd@128-78.46.184.112:22-147.75.109.163:44432.service: Deactivated successfully.
Sep 13 00:30:08.039414 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:30:08.041015 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:30:08.043092 systemd-logind[1566]: Removed session 9.
Sep 13 00:30:08.699934 sshd[6793]: Invalid user user from 119.1.156.50 port 14833
Sep 13 00:30:08.925262 sshd[6793]: Connection closed by invalid user user 119.1.156.50 port 14833 [preauth]
Sep 13 00:30:08.929018 systemd[1]: sshd@130-78.46.184.112:22-119.1.156.50:14833.service: Deactivated successfully.
Sep 13 00:30:09.169961 systemd[1]: Started sshd@131-78.46.184.112:22-119.1.156.50:15730.service - OpenSSH per-connection server daemon (119.1.156.50:15730).
Sep 13 00:30:10.120769 sshd[6801]: Invalid user user from 119.1.156.50 port 15730
Sep 13 00:30:10.352232 sshd[6801]: Connection closed by invalid user user 119.1.156.50 port 15730 [preauth]
Sep 13 00:30:10.359619 systemd[1]: sshd@131-78.46.184.112:22-119.1.156.50:15730.service: Deactivated successfully.
Sep 13 00:30:10.571271 systemd[1]: Started sshd@132-78.46.184.112:22-119.1.156.50:16733.service - OpenSSH per-connection server daemon (119.1.156.50:16733).
Sep 13 00:30:11.470543 sshd[6806]: Invalid user user from 119.1.156.50 port 16733
Sep 13 00:30:11.689928 sshd[6806]: Connection closed by invalid user user 119.1.156.50 port 16733 [preauth]
Sep 13 00:30:11.692730 systemd[1]: sshd@132-78.46.184.112:22-119.1.156.50:16733.service: Deactivated successfully.
Sep 13 00:30:11.929642 systemd[1]: Started sshd@133-78.46.184.112:22-119.1.156.50:17554.service - OpenSSH per-connection server daemon (119.1.156.50:17554).
Sep 13 00:30:12.847808 sshd[6811]: Invalid user user from 119.1.156.50 port 17554
Sep 13 00:30:13.068896 sshd[6811]: Connection closed by invalid user user 119.1.156.50 port 17554 [preauth]
Sep 13 00:30:13.079403 systemd[1]: sshd@133-78.46.184.112:22-119.1.156.50:17554.service: Deactivated successfully.
Sep 13 00:30:13.194844 systemd[1]: Started sshd@134-78.46.184.112:22-147.75.109.163:44218.service - OpenSSH per-connection server daemon (147.75.109.163:44218).
Sep 13 00:30:13.294532 systemd[1]: Started sshd@135-78.46.184.112:22-119.1.156.50:18461.service - OpenSSH per-connection server daemon (119.1.156.50:18461).
Sep 13 00:30:14.178015 sshd[6818]: Invalid user user from 119.1.156.50 port 18461
Sep 13 00:30:14.193344 sshd[6816]: Accepted publickey for core from 147.75.109.163 port 44218 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:14.197584 sshd[6816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:14.207067 systemd-logind[1566]: New session 10 of user core.
Sep 13 00:30:14.211005 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:30:14.395099 sshd[6818]: Connection closed by invalid user user 119.1.156.50 port 18461 [preauth]
Sep 13 00:30:14.397778 systemd[1]: sshd@135-78.46.184.112:22-119.1.156.50:18461.service: Deactivated successfully.
Sep 13 00:30:14.639512 systemd[1]: Started sshd@136-78.46.184.112:22-119.1.156.50:19439.service - OpenSSH per-connection server daemon (119.1.156.50:19439).
Sep 13 00:30:14.952957 sshd[6816]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:14.959435 systemd[1]: sshd@134-78.46.184.112:22-147.75.109.163:44218.service: Deactivated successfully.
Sep 13 00:30:14.967019 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:30:14.968776 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:30:14.970322 systemd-logind[1566]: Removed session 10.
Sep 13 00:30:15.560660 sshd[6825]: Invalid user user from 119.1.156.50 port 19439
Sep 13 00:30:15.783863 sshd[6825]: Connection closed by invalid user user 119.1.156.50 port 19439 [preauth]
Sep 13 00:30:15.791022 systemd[1]: sshd@136-78.46.184.112:22-119.1.156.50:19439.service: Deactivated successfully.
Sep 13 00:30:16.033830 systemd[1]: Started sshd@137-78.46.184.112:22-119.1.156.50:20454.service - OpenSSH per-connection server daemon (119.1.156.50:20454).
Sep 13 00:30:16.953953 sshd[6841]: Invalid user user from 119.1.156.50 port 20454
Sep 13 00:30:17.178611 sshd[6841]: Connection closed by invalid user user 119.1.156.50 port 20454 [preauth]
Sep 13 00:30:17.183246 systemd[1]: sshd@137-78.46.184.112:22-119.1.156.50:20454.service: Deactivated successfully.
Sep 13 00:30:17.397074 systemd[1]: Started sshd@138-78.46.184.112:22-119.1.156.50:21392.service - OpenSSH per-connection server daemon (119.1.156.50:21392).
Sep 13 00:30:18.281204 sshd[6846]: Invalid user user from 119.1.156.50 port 21392
Sep 13 00:30:18.497365 sshd[6846]: Connection closed by invalid user user 119.1.156.50 port 21392 [preauth]
Sep 13 00:30:18.504219 systemd[1]: sshd@138-78.46.184.112:22-119.1.156.50:21392.service: Deactivated successfully.
Sep 13 00:30:18.737726 systemd[1]: Started sshd@139-78.46.184.112:22-119.1.156.50:22330.service - OpenSSH per-connection server daemon (119.1.156.50:22330).
Sep 13 00:30:19.673066 sshd[6851]: Invalid user user from 119.1.156.50 port 22330
Sep 13 00:30:19.899478 sshd[6851]: Connection closed by invalid user user 119.1.156.50 port 22330 [preauth]
Sep 13 00:30:19.910879 systemd[1]: sshd@139-78.46.184.112:22-119.1.156.50:22330.service: Deactivated successfully.
Sep 13 00:30:20.140951 systemd[1]: Started sshd@140-78.46.184.112:22-147.75.109.163:44228.service - OpenSSH per-connection server daemon (147.75.109.163:44228).
Sep 13 00:30:20.143761 systemd[1]: Started sshd@141-78.46.184.112:22-119.1.156.50:23345.service - OpenSSH per-connection server daemon (119.1.156.50:23345).
Sep 13 00:30:21.070299 sshd[6857]: Invalid user user from 119.1.156.50 port 23345
Sep 13 00:30:21.143645 sshd[6856]: Accepted publickey for core from 147.75.109.163 port 44228 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:21.148580 sshd[6856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:21.154889 systemd-logind[1566]: New session 11 of user core.
Sep 13 00:30:21.165142 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:30:21.291049 sshd[6857]: Connection closed by invalid user user 119.1.156.50 port 23345 [preauth]
Sep 13 00:30:21.298251 systemd[1]: sshd@141-78.46.184.112:22-119.1.156.50:23345.service: Deactivated successfully.
Sep 13 00:30:21.540902 systemd[1]: Started sshd@142-78.46.184.112:22-119.1.156.50:24167.service - OpenSSH per-connection server daemon (119.1.156.50:24167).
Sep 13 00:30:21.935902 sshd[6856]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:21.940803 systemd[1]: sshd@140-78.46.184.112:22-147.75.109.163:44228.service: Deactivated successfully.
Sep 13 00:30:21.946715 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:30:21.950882 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:30:21.952827 systemd-logind[1566]: Removed session 11.
Sep 13 00:30:22.478490 sshd[6865]: Invalid user user from 119.1.156.50 port 24167
Sep 13 00:30:22.705716 sshd[6865]: Connection closed by invalid user user 119.1.156.50 port 24167 [preauth]
Sep 13 00:30:22.706729 systemd[1]: sshd@142-78.46.184.112:22-119.1.156.50:24167.service: Deactivated successfully.
Sep 13 00:30:22.941917 systemd[1]: Started sshd@143-78.46.184.112:22-119.1.156.50:25060.service - OpenSSH per-connection server daemon (119.1.156.50:25060).
Sep 13 00:30:23.851391 sshd[6883]: Invalid user user from 119.1.156.50 port 25060
Sep 13 00:30:24.074921 sshd[6883]: Connection closed by invalid user user 119.1.156.50 port 25060 [preauth]
Sep 13 00:30:24.079109 systemd[1]: sshd@143-78.46.184.112:22-119.1.156.50:25060.service: Deactivated successfully.
Sep 13 00:30:24.308002 systemd[1]: Started sshd@144-78.46.184.112:22-119.1.156.50:25997.service - OpenSSH per-connection server daemon (119.1.156.50:25997).
Sep 13 00:30:25.232337 sshd[6930]: Invalid user user from 119.1.156.50 port 25997
Sep 13 00:30:25.458018 sshd[6930]: Connection closed by invalid user user 119.1.156.50 port 25997 [preauth]
Sep 13 00:30:25.467239 systemd[1]: sshd@144-78.46.184.112:22-119.1.156.50:25997.service: Deactivated successfully.
Sep 13 00:30:25.677913 systemd[1]: Started sshd@145-78.46.184.112:22-119.1.156.50:27075.service - OpenSSH per-connection server daemon (119.1.156.50:27075).
Sep 13 00:30:26.571887 sshd[6935]: Invalid user user from 119.1.156.50 port 27075
Sep 13 00:30:26.786213 sshd[6935]: Connection closed by invalid user user 119.1.156.50 port 27075 [preauth]
Sep 13 00:30:26.789908 systemd[1]: sshd@145-78.46.184.112:22-119.1.156.50:27075.service: Deactivated successfully.
Sep 13 00:30:27.030803 systemd[1]: Started sshd@146-78.46.184.112:22-119.1.156.50:28115.service - OpenSSH per-connection server daemon (119.1.156.50:28115).
Sep 13 00:30:27.103931 systemd[1]: Started sshd@147-78.46.184.112:22-147.75.109.163:35232.service - OpenSSH per-connection server daemon (147.75.109.163:35232).
Sep 13 00:30:27.945107 sshd[6962]: Invalid user user from 119.1.156.50 port 28115
Sep 13 00:30:28.121366 sshd[6964]: Accepted publickey for core from 147.75.109.163 port 35232 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:28.124595 sshd[6964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:28.135033 systemd-logind[1566]: New session 12 of user core.
Sep 13 00:30:28.140027 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:30:28.164671 sshd[6962]: Connection closed by invalid user user 119.1.156.50 port 28115 [preauth]
Sep 13 00:30:28.169414 systemd[1]: sshd@146-78.46.184.112:22-119.1.156.50:28115.service: Deactivated successfully.
Sep 13 00:30:28.404165 systemd[1]: Started sshd@148-78.46.184.112:22-119.1.156.50:28999.service - OpenSSH per-connection server daemon (119.1.156.50:28999).
Sep 13 00:30:28.914384 sshd[6964]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:28.928528 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:30:28.930853 systemd[1]: sshd@147-78.46.184.112:22-147.75.109.163:35232.service: Deactivated successfully.
Sep 13 00:30:28.936610 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:30:28.938344 systemd-logind[1566]: Removed session 12.
Sep 13 00:30:29.310058 sshd[6971]: Invalid user user from 119.1.156.50 port 28999
Sep 13 00:30:29.533124 sshd[6971]: Connection closed by invalid user user 119.1.156.50 port 28999 [preauth]
Sep 13 00:30:29.537845 systemd[1]: sshd@148-78.46.184.112:22-119.1.156.50:28999.service: Deactivated successfully.
Sep 13 00:30:29.776926 systemd[1]: Started sshd@149-78.46.184.112:22-119.1.156.50:29956.service - OpenSSH per-connection server daemon (119.1.156.50:29956).
Sep 13 00:30:30.704290 sshd[6989]: Invalid user user from 119.1.156.50 port 29956
Sep 13 00:30:30.928902 sshd[6989]: Connection closed by invalid user user 119.1.156.50 port 29956 [preauth]
Sep 13 00:30:30.935614 systemd[1]: sshd@149-78.46.184.112:22-119.1.156.50:29956.service: Deactivated successfully.
Sep 13 00:30:31.163921 systemd[1]: Started sshd@150-78.46.184.112:22-119.1.156.50:31048.service - OpenSSH per-connection server daemon (119.1.156.50:31048).
Sep 13 00:30:32.082162 sshd[6994]: Invalid user user from 119.1.156.50 port 31048
Sep 13 00:30:32.303677 sshd[6994]: Connection closed by invalid user user 119.1.156.50 port 31048 [preauth]
Sep 13 00:30:32.317054 systemd[1]: sshd@150-78.46.184.112:22-119.1.156.50:31048.service: Deactivated successfully.
Sep 13 00:30:32.530521 systemd[1]: Started sshd@151-78.46.184.112:22-119.1.156.50:31918.service - OpenSSH per-connection server daemon (119.1.156.50:31918).
Sep 13 00:30:33.427019 sshd[6999]: Invalid user user from 119.1.156.50 port 31918
Sep 13 00:30:33.642509 sshd[6999]: Connection closed by invalid user user 119.1.156.50 port 31918 [preauth]
Sep 13 00:30:33.651508 systemd[1]: sshd@151-78.46.184.112:22-119.1.156.50:31918.service: Deactivated successfully.
Sep 13 00:30:33.886985 systemd[1]: Started sshd@152-78.46.184.112:22-119.1.156.50:32726.service - OpenSSH per-connection server daemon (119.1.156.50:32726).
Sep 13 00:30:34.079816 systemd[1]: Started sshd@153-78.46.184.112:22-147.75.109.163:41136.service - OpenSSH per-connection server daemon (147.75.109.163:41136).
Sep 13 00:30:34.816328 sshd[7004]: Invalid user user from 119.1.156.50 port 32726
Sep 13 00:30:35.043099 sshd[7004]: Connection closed by invalid user user 119.1.156.50 port 32726 [preauth]
Sep 13 00:30:35.052969 systemd[1]: sshd@152-78.46.184.112:22-119.1.156.50:32726.service: Deactivated successfully.
Sep 13 00:30:35.099448 sshd[7006]: Accepted publickey for core from 147.75.109.163 port 41136 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:35.104314 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:35.122709 systemd-logind[1566]: New session 13 of user core.
Sep 13 00:30:35.134546 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:30:35.276317 systemd[1]: Started sshd@154-78.46.184.112:22-119.1.156.50:33689.service - OpenSSH per-connection server daemon (119.1.156.50:33689).
Sep 13 00:30:36.000020 sshd[7006]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:36.014660 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:30:36.016603 systemd[1]: sshd@153-78.46.184.112:22-147.75.109.163:41136.service: Deactivated successfully.
Sep 13 00:30:36.034430 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:30:36.039430 systemd-logind[1566]: Removed session 13.
Sep 13 00:30:36.202542 sshd[7013]: Invalid user user from 119.1.156.50 port 33689
Sep 13 00:30:36.428408 sshd[7013]: Connection closed by invalid user user 119.1.156.50 port 33689 [preauth]
Sep 13 00:30:36.432902 systemd[1]: sshd@154-78.46.184.112:22-119.1.156.50:33689.service: Deactivated successfully.
Sep 13 00:30:36.655869 systemd[1]: Started sshd@155-78.46.184.112:22-119.1.156.50:34794.service - OpenSSH per-connection server daemon (119.1.156.50:34794).
Sep 13 00:30:37.568969 sshd[7029]: Invalid user user from 119.1.156.50 port 34794
Sep 13 00:30:37.789486 sshd[7029]: Connection closed by invalid user user 119.1.156.50 port 34794 [preauth]
Sep 13 00:30:37.800579 systemd[1]: sshd@155-78.46.184.112:22-119.1.156.50:34794.service: Deactivated successfully.
Sep 13 00:30:38.027907 systemd[1]: Started sshd@156-78.46.184.112:22-119.1.156.50:35791.service - OpenSSH per-connection server daemon (119.1.156.50:35791).
Sep 13 00:30:38.966533 sshd[7034]: Invalid user user from 119.1.156.50 port 35791
Sep 13 00:30:39.190846 sshd[7034]: Connection closed by invalid user user 119.1.156.50 port 35791 [preauth]
Sep 13 00:30:39.194178 systemd[1]: sshd@156-78.46.184.112:22-119.1.156.50:35791.service: Deactivated successfully.
Sep 13 00:30:39.433444 systemd[1]: Started sshd@157-78.46.184.112:22-119.1.156.50:36759.service - OpenSSH per-connection server daemon (119.1.156.50:36759).
Sep 13 00:30:40.363380 sshd[7039]: Invalid user user from 119.1.156.50 port 36759
Sep 13 00:30:40.589567 sshd[7039]: Connection closed by invalid user user 119.1.156.50 port 36759 [preauth]
Sep 13 00:30:40.594957 systemd[1]: sshd@157-78.46.184.112:22-119.1.156.50:36759.service: Deactivated successfully.
Sep 13 00:30:40.827920 systemd[1]: Started sshd@158-78.46.184.112:22-119.1.156.50:37701.service - OpenSSH per-connection server daemon (119.1.156.50:37701).
Sep 13 00:30:41.169843 systemd[1]: Started sshd@159-78.46.184.112:22-147.75.109.163:47894.service - OpenSSH per-connection server daemon (147.75.109.163:47894).
Sep 13 00:30:41.731872 sshd[7044]: Invalid user user from 119.1.156.50 port 37701
Sep 13 00:30:41.952652 sshd[7044]: Connection closed by invalid user user 119.1.156.50 port 37701 [preauth]
Sep 13 00:30:41.958846 systemd[1]: sshd@158-78.46.184.112:22-119.1.156.50:37701.service: Deactivated successfully.
Sep 13 00:30:42.167835 systemd[1]: Started sshd@160-78.46.184.112:22-119.1.156.50:38629.service - OpenSSH per-connection server daemon (119.1.156.50:38629).
Sep 13 00:30:42.172091 sshd[7046]: Accepted publickey for core from 147.75.109.163 port 47894 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:42.175209 sshd[7046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:42.185565 systemd-logind[1566]: New session 14 of user core.
Sep 13 00:30:42.194509 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:30:42.956296 sshd[7046]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:42.962937 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:30:42.964176 systemd[1]: sshd@159-78.46.184.112:22-147.75.109.163:47894.service: Deactivated successfully.
Sep 13 00:30:42.969669 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:30:42.971616 systemd-logind[1566]: Removed session 14.
Sep 13 00:30:43.041452 sshd[7051]: Invalid user user from 119.1.156.50 port 38629
Sep 13 00:30:43.257563 sshd[7051]: Connection closed by invalid user user 119.1.156.50 port 38629 [preauth]
Sep 13 00:30:43.277564 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.4EbEIL.mount: Deactivated successfully.
Sep 13 00:30:43.278950 systemd[1]: sshd@160-78.46.184.112:22-119.1.156.50:38629.service: Deactivated successfully.
Sep 13 00:30:43.498918 systemd[1]: Started sshd@161-78.46.184.112:22-119.1.156.50:39486.service - OpenSSH per-connection server daemon (119.1.156.50:39486).
Sep 13 00:30:44.427062 sshd[7088]: Invalid user user from 119.1.156.50 port 39486
Sep 13 00:30:44.656640 sshd[7088]: Connection closed by invalid user user 119.1.156.50 port 39486 [preauth]
Sep 13 00:30:44.662976 systemd[1]: sshd@161-78.46.184.112:22-119.1.156.50:39486.service: Deactivated successfully.
Sep 13 00:30:44.889911 systemd[1]: Started sshd@162-78.46.184.112:22-119.1.156.50:40462.service - OpenSSH per-connection server daemon (119.1.156.50:40462).
Sep 13 00:30:45.825079 sshd[7099]: Invalid user user from 119.1.156.50 port 40462
Sep 13 00:30:46.053512 sshd[7099]: Connection closed by invalid user user 119.1.156.50 port 40462 [preauth]
Sep 13 00:30:46.060068 systemd[1]: sshd@162-78.46.184.112:22-119.1.156.50:40462.service: Deactivated successfully.
Sep 13 00:30:46.279972 systemd[1]: Started sshd@163-78.46.184.112:22-119.1.156.50:41503.service - OpenSSH per-connection server daemon (119.1.156.50:41503).
Sep 13 00:30:47.187601 sshd[7104]: Invalid user user from 119.1.156.50 port 41503
Sep 13 00:30:47.407545 sshd[7104]: Connection closed by invalid user user 119.1.156.50 port 41503 [preauth]
Sep 13 00:30:47.415124 systemd[1]: sshd@163-78.46.184.112:22-119.1.156.50:41503.service: Deactivated successfully.
Sep 13 00:30:47.649834 systemd[1]: Started sshd@164-78.46.184.112:22-119.1.156.50:42403.service - OpenSSH per-connection server daemon (119.1.156.50:42403).
Sep 13 00:30:48.143635 systemd[1]: Started sshd@165-78.46.184.112:22-147.75.109.163:47906.service - OpenSSH per-connection server daemon (147.75.109.163:47906).
Sep 13 00:30:48.591827 sshd[7110]: Invalid user user from 119.1.156.50 port 42403
Sep 13 00:30:48.817204 sshd[7110]: Connection closed by invalid user user 119.1.156.50 port 42403 [preauth]
Sep 13 00:30:48.821659 systemd[1]: sshd@164-78.46.184.112:22-119.1.156.50:42403.service: Deactivated successfully.
Sep 13 00:30:49.036840 systemd[1]: Started sshd@166-78.46.184.112:22-119.1.156.50:43451.service - OpenSSH per-connection server daemon (119.1.156.50:43451).
Sep 13 00:30:49.139827 sshd[7112]: Accepted publickey for core from 147.75.109.163 port 47906 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:49.146205 sshd[7112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:49.158380 systemd-logind[1566]: New session 15 of user core.
Sep 13 00:30:49.165052 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:30:49.918756 sshd[7135]: Invalid user user from 119.1.156.50 port 43451
Sep 13 00:30:49.933245 sshd[7112]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:49.939288 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:30:49.942330 systemd[1]: sshd@165-78.46.184.112:22-147.75.109.163:47906.service: Deactivated successfully.
Sep 13 00:30:49.947925 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:30:49.950314 systemd-logind[1566]: Removed session 15.
Sep 13 00:30:50.134588 sshd[7135]: Connection closed by invalid user user 119.1.156.50 port 43451 [preauth]
Sep 13 00:30:50.142150 systemd[1]: sshd@166-78.46.184.112:22-119.1.156.50:43451.service: Deactivated successfully.
Sep 13 00:30:50.366852 systemd[1]: Started sshd@167-78.46.184.112:22-119.1.156.50:44388.service - OpenSSH per-connection server daemon (119.1.156.50:44388).
Sep 13 00:30:51.279347 sshd[7153]: Invalid user user from 119.1.156.50 port 44388
Sep 13 00:30:51.506650 sshd[7153]: Connection closed by invalid user user 119.1.156.50 port 44388 [preauth]
Sep 13 00:30:51.511293 systemd[1]: sshd@167-78.46.184.112:22-119.1.156.50:44388.service: Deactivated successfully.
Sep 13 00:30:51.750975 systemd[1]: Started sshd@168-78.46.184.112:22-119.1.156.50:45301.service - OpenSSH per-connection server daemon (119.1.156.50:45301).
Sep 13 00:30:52.693915 sshd[7160]: Invalid user user from 119.1.156.50 port 45301
Sep 13 00:30:52.923497 sshd[7160]: Connection closed by invalid user user 119.1.156.50 port 45301 [preauth]
Sep 13 00:30:52.928186 systemd[1]: sshd@168-78.46.184.112:22-119.1.156.50:45301.service: Deactivated successfully.
Sep 13 00:30:53.149604 systemd[1]: Started sshd@169-78.46.184.112:22-119.1.156.50:46210.service - OpenSSH per-connection server daemon (119.1.156.50:46210).
Sep 13 00:30:54.074484 sshd[7166]: Invalid user user from 119.1.156.50 port 46210
Sep 13 00:30:54.298637 sshd[7166]: Connection closed by invalid user user 119.1.156.50 port 46210 [preauth]
Sep 13 00:30:54.303338 systemd[1]: sshd@169-78.46.184.112:22-119.1.156.50:46210.service: Deactivated successfully.
Sep 13 00:30:54.550149 systemd[1]: Started sshd@170-78.46.184.112:22-119.1.156.50:47108.service - OpenSSH per-connection server daemon (119.1.156.50:47108).
Sep 13 00:30:55.109904 systemd[1]: Started sshd@171-78.46.184.112:22-147.75.109.163:56942.service - OpenSSH per-connection server daemon (147.75.109.163:56942).
Sep 13 00:30:55.480339 sshd[7212]: Invalid user user from 119.1.156.50 port 47108
Sep 13 00:30:55.705545 sshd[7212]: Connection closed by invalid user user 119.1.156.50 port 47108 [preauth]
Sep 13 00:30:55.711285 systemd[1]: sshd@170-78.46.184.112:22-119.1.156.50:47108.service: Deactivated successfully.
Sep 13 00:30:55.940371 systemd[1]: Started sshd@172-78.46.184.112:22-119.1.156.50:48137.service - OpenSSH per-connection server daemon (119.1.156.50:48137).
Sep 13 00:30:56.117400 sshd[7214]: Accepted publickey for core from 147.75.109.163 port 56942 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:30:56.122143 sshd[7214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:30:56.136421 systemd-logind[1566]: New session 16 of user core.
Sep 13 00:30:56.139925 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:30:56.878268 sshd[7242]: Invalid user user from 119.1.156.50 port 48137
Sep 13 00:30:56.914593 sshd[7214]: pam_unix(sshd:session): session closed for user core
Sep 13 00:30:56.922029 systemd[1]: sshd@171-78.46.184.112:22-147.75.109.163:56942.service: Deactivated successfully.
Sep 13 00:30:56.930818 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:30:56.932421 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:30:56.934829 systemd-logind[1566]: Removed session 16.
Sep 13 00:30:57.111334 sshd[7242]: Connection closed by invalid user user 119.1.156.50 port 48137 [preauth]
Sep 13 00:30:57.114090 systemd[1]: sshd@172-78.46.184.112:22-119.1.156.50:48137.service: Deactivated successfully.
Sep 13 00:30:57.352999 systemd[1]: Started sshd@173-78.46.184.112:22-119.1.156.50:49107.service - OpenSSH per-connection server daemon (119.1.156.50:49107).
Sep 13 00:30:58.303831 sshd[7261]: Invalid user user from 119.1.156.50 port 49107
Sep 13 00:30:58.528807 sshd[7261]: Connection closed by invalid user user 119.1.156.50 port 49107 [preauth]
Sep 13 00:30:58.532400 systemd[1]: sshd@173-78.46.184.112:22-119.1.156.50:49107.service: Deactivated successfully.
Sep 13 00:30:58.769902 systemd[1]: Started sshd@174-78.46.184.112:22-119.1.156.50:50171.service - OpenSSH per-connection server daemon (119.1.156.50:50171).
Sep 13 00:30:59.700792 sshd[7268]: Invalid user user from 119.1.156.50 port 50171
Sep 13 00:30:59.930813 sshd[7268]: Connection closed by invalid user user 119.1.156.50 port 50171 [preauth]
Sep 13 00:30:59.937408 systemd[1]: sshd@174-78.46.184.112:22-119.1.156.50:50171.service: Deactivated successfully.
Sep 13 00:31:00.157098 systemd[1]: Started sshd@175-78.46.184.112:22-119.1.156.50:51201.service - OpenSSH per-connection server daemon (119.1.156.50:51201).
Sep 13 00:31:01.060558 sshd[7273]: Invalid user user from 119.1.156.50 port 51201
Sep 13 00:31:01.280564 sshd[7273]: Connection closed by invalid user user 119.1.156.50 port 51201 [preauth]
Sep 13 00:31:01.283633 systemd[1]: sshd@175-78.46.184.112:22-119.1.156.50:51201.service: Deactivated successfully.
Sep 13 00:31:01.523207 systemd[1]: Started sshd@176-78.46.184.112:22-119.1.156.50:52156.service - OpenSSH per-connection server daemon (119.1.156.50:52156).
Sep 13 00:31:02.085856 systemd[1]: Started sshd@177-78.46.184.112:22-147.75.109.163:58624.service - OpenSSH per-connection server daemon (147.75.109.163:58624).
Sep 13 00:31:02.454599 sshd[7278]: Invalid user user from 119.1.156.50 port 52156
Sep 13 00:31:02.676605 sshd[7278]: Connection closed by invalid user user 119.1.156.50 port 52156 [preauth]
Sep 13 00:31:02.681299 systemd[1]: sshd@176-78.46.184.112:22-119.1.156.50:52156.service: Deactivated successfully.
Sep 13 00:31:02.909502 systemd[1]: Started sshd@178-78.46.184.112:22-119.1.156.50:53024.service - OpenSSH per-connection server daemon (119.1.156.50:53024).
Sep 13 00:31:03.081940 sshd[7280]: Accepted publickey for core from 147.75.109.163 port 58624 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:03.085893 sshd[7280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:03.098014 systemd-logind[1566]: New session 17 of user core.
Sep 13 00:31:03.100939 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:31:03.826183 sshd[7285]: Invalid user user from 119.1.156.50 port 53024
Sep 13 00:31:03.867251 sshd[7280]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:03.875184 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:31:03.875941 systemd[1]: sshd@177-78.46.184.112:22-147.75.109.163:58624.service: Deactivated successfully.
Sep 13 00:31:03.882135 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:31:03.884799 systemd-logind[1566]: Removed session 17.
Sep 13 00:31:04.047014 sshd[7285]: Connection closed by invalid user user 119.1.156.50 port 53024 [preauth]
Sep 13 00:31:04.054129 systemd[1]: sshd@178-78.46.184.112:22-119.1.156.50:53024.service: Deactivated successfully.
Sep 13 00:31:04.265493 systemd[1]: Started sshd@179-78.46.184.112:22-119.1.156.50:53851.service - OpenSSH per-connection server daemon (119.1.156.50:53851).
Sep 13 00:31:05.137035 sshd[7303]: Invalid user user from 119.1.156.50 port 53851
Sep 13 00:31:05.359494 sshd[7303]: Connection closed by invalid user user 119.1.156.50 port 53851 [preauth]
Sep 13 00:31:05.362921 systemd[1]: sshd@179-78.46.184.112:22-119.1.156.50:53851.service: Deactivated successfully.
Sep 13 00:31:05.599425 systemd[1]: Started sshd@180-78.46.184.112:22-119.1.156.50:54769.service - OpenSSH per-connection server daemon (119.1.156.50:54769).
Sep 13 00:31:06.506881 sshd[7309]: Invalid user user from 119.1.156.50 port 54769
Sep 13 00:31:06.729373 sshd[7309]: Connection closed by invalid user user 119.1.156.50 port 54769 [preauth]
Sep 13 00:31:06.729961 systemd[1]: sshd@180-78.46.184.112:22-119.1.156.50:54769.service: Deactivated successfully.
Sep 13 00:31:06.948438 systemd[1]: Started sshd@181-78.46.184.112:22-119.1.156.50:55734.service - OpenSSH per-connection server daemon (119.1.156.50:55734).
Sep 13 00:31:07.855255 sshd[7314]: Invalid user user from 119.1.156.50 port 55734
Sep 13 00:31:08.074107 sshd[7314]: Connection closed by invalid user user 119.1.156.50 port 55734 [preauth]
Sep 13 00:31:08.078502 systemd[1]: sshd@181-78.46.184.112:22-119.1.156.50:55734.service: Deactivated successfully.
Sep 13 00:31:08.328653 systemd[1]: Started sshd@182-78.46.184.112:22-119.1.156.50:56616.service - OpenSSH per-connection server daemon (119.1.156.50:56616).
Sep 13 00:31:09.037951 systemd[1]: Started sshd@183-78.46.184.112:22-147.75.109.163:58640.service - OpenSSH per-connection server daemon (147.75.109.163:58640).
Sep 13 00:31:09.273632 sshd[7319]: Invalid user ubuntu from 119.1.156.50 port 56616
Sep 13 00:31:09.501799 sshd[7319]: Connection closed by invalid user ubuntu 119.1.156.50 port 56616 [preauth]
Sep 13 00:31:09.506328 systemd[1]: sshd@182-78.46.184.112:22-119.1.156.50:56616.service: Deactivated successfully.
Sep 13 00:31:09.737080 systemd[1]: Started sshd@184-78.46.184.112:22-119.1.156.50:57623.service - OpenSSH per-connection server daemon (119.1.156.50:57623).
Sep 13 00:31:10.037274 sshd[7321]: Accepted publickey for core from 147.75.109.163 port 58640 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:10.039277 sshd[7321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:10.046026 systemd-logind[1566]: New session 18 of user core.
Sep 13 00:31:10.055010 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:31:10.688528 sshd[7326]: Invalid user ubuntu from 119.1.156.50 port 57623
Sep 13 00:31:10.825835 sshd[7321]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:10.835955 systemd[1]: sshd@183-78.46.184.112:22-147.75.109.163:58640.service: Deactivated successfully.
Sep 13 00:31:10.843710 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:31:10.845416 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:31:10.847306 systemd-logind[1566]: Removed session 18.
Sep 13 00:31:10.916158 sshd[7326]: Connection closed by invalid user ubuntu 119.1.156.50 port 57623 [preauth]
Sep 13 00:31:10.918678 systemd[1]: sshd@184-78.46.184.112:22-119.1.156.50:57623.service: Deactivated successfully.
Sep 13 00:31:11.148655 systemd[1]: Started sshd@185-78.46.184.112:22-119.1.156.50:58553.service - OpenSSH per-connection server daemon (119.1.156.50:58553).
Sep 13 00:31:12.063775 sshd[7344]: Invalid user ubuntu from 119.1.156.50 port 58553
Sep 13 00:31:12.288822 sshd[7344]: Connection closed by invalid user ubuntu 119.1.156.50 port 58553 [preauth]
Sep 13 00:31:12.290045 systemd[1]: sshd@185-78.46.184.112:22-119.1.156.50:58553.service: Deactivated successfully.
Sep 13 00:31:12.506881 systemd[1]: Started sshd@186-78.46.184.112:22-119.1.156.50:59468.service - OpenSSH per-connection server daemon (119.1.156.50:59468).
Sep 13 00:31:13.411382 sshd[7363]: Invalid user ubuntu from 119.1.156.50 port 59468
Sep 13 00:31:13.628548 sshd[7363]: Connection closed by invalid user ubuntu 119.1.156.50 port 59468 [preauth]
Sep 13 00:31:13.632045 systemd[1]: sshd@186-78.46.184.112:22-119.1.156.50:59468.service: Deactivated successfully.
Sep 13 00:31:13.865066 systemd[1]: Started sshd@187-78.46.184.112:22-119.1.156.50:60414.service - OpenSSH per-connection server daemon (119.1.156.50:60414).
Sep 13 00:31:14.772215 sshd[7376]: Invalid user ubuntu from 119.1.156.50 port 60414
Sep 13 00:31:14.992604 sshd[7376]: Connection closed by invalid user ubuntu 119.1.156.50 port 60414 [preauth]
Sep 13 00:31:14.995795 systemd[1]: sshd@187-78.46.184.112:22-119.1.156.50:60414.service: Deactivated successfully.
Sep 13 00:31:15.220332 systemd[1]: Started sshd@188-78.46.184.112:22-119.1.156.50:61325.service - OpenSSH per-connection server daemon (119.1.156.50:61325).
Sep 13 00:31:15.992503 systemd[1]: Started sshd@189-78.46.184.112:22-147.75.109.163:55984.service - OpenSSH per-connection server daemon (147.75.109.163:55984).
Sep 13 00:31:16.133149 sshd[7381]: Invalid user ubuntu from 119.1.156.50 port 61325
Sep 13 00:31:16.357731 sshd[7381]: Connection closed by invalid user ubuntu 119.1.156.50 port 61325 [preauth]
Sep 13 00:31:16.356592 systemd[1]: sshd@188-78.46.184.112:22-119.1.156.50:61325.service: Deactivated successfully.
Sep 13 00:31:16.594863 systemd[1]: Started sshd@190-78.46.184.112:22-119.1.156.50:62189.service - OpenSSH per-connection server daemon (119.1.156.50:62189).
Sep 13 00:31:16.992412 sshd[7383]: Accepted publickey for core from 147.75.109.163 port 55984 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:16.994611 sshd[7383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:17.001531 systemd-logind[1566]: New session 19 of user core.
Sep 13 00:31:17.010193 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:31:17.514060 sshd[7388]: Invalid user ubuntu from 119.1.156.50 port 62189
Sep 13 00:31:17.735638 sshd[7388]: Connection closed by invalid user ubuntu 119.1.156.50 port 62189 [preauth]
Sep 13 00:31:17.738994 systemd[1]: sshd@190-78.46.184.112:22-119.1.156.50:62189.service: Deactivated successfully.
Sep 13 00:31:17.776594 sshd[7383]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:17.784363 systemd[1]: sshd@189-78.46.184.112:22-147.75.109.163:55984.service: Deactivated successfully.
Sep 13 00:31:17.789199 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:31:17.790690 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:31:17.792374 systemd-logind[1566]: Removed session 19.
Sep 13 00:31:17.981226 systemd[1]: Started sshd@191-78.46.184.112:22-119.1.156.50:63083.service - OpenSSH per-connection server daemon (119.1.156.50:63083).
Sep 13 00:31:18.909779 sshd[7406]: Invalid user ubuntu from 119.1.156.50 port 63083
Sep 13 00:31:19.136156 sshd[7406]: Connection closed by invalid user ubuntu 119.1.156.50 port 63083 [preauth]
Sep 13 00:31:19.144923 systemd[1]: sshd@191-78.46.184.112:22-119.1.156.50:63083.service: Deactivated successfully.
Sep 13 00:31:19.353946 systemd[1]: Started sshd@192-78.46.184.112:22-119.1.156.50:64141.service - OpenSSH per-connection server daemon (119.1.156.50:64141).
Sep 13 00:31:20.263298 sshd[7411]: Invalid user ubuntu from 119.1.156.50 port 64141
Sep 13 00:31:20.484155 sshd[7411]: Connection closed by invalid user ubuntu 119.1.156.50 port 64141 [preauth]
Sep 13 00:31:20.488438 systemd[1]: sshd@192-78.46.184.112:22-119.1.156.50:64141.service: Deactivated successfully.
Sep 13 00:31:20.703927 systemd[1]: Started sshd@193-78.46.184.112:22-119.1.156.50:65167.service - OpenSSH per-connection server daemon (119.1.156.50:65167).
Sep 13 00:31:21.584697 sshd[7416]: Invalid user ubuntu from 119.1.156.50 port 65167
Sep 13 00:31:21.801820 sshd[7416]: Connection closed by invalid user ubuntu 119.1.156.50 port 65167 [preauth]
Sep 13 00:31:21.805493 systemd[1]: sshd@193-78.46.184.112:22-119.1.156.50:65167.service: Deactivated successfully.
Sep 13 00:31:22.048695 systemd[1]: Started sshd@194-78.46.184.112:22-119.1.156.50:1525.service - OpenSSH per-connection server daemon (119.1.156.50:1525).
Sep 13 00:31:22.949927 systemd[1]: Started sshd@195-78.46.184.112:22-147.75.109.163:37924.service - OpenSSH per-connection server daemon (147.75.109.163:37924).
Sep 13 00:31:22.984012 sshd[7423]: Invalid user ubuntu from 119.1.156.50 port 1525
Sep 13 00:31:23.213268 sshd[7423]: Connection closed by invalid user ubuntu 119.1.156.50 port 1525 [preauth]
Sep 13 00:31:23.222008 systemd[1]: sshd@194-78.46.184.112:22-119.1.156.50:1525.service: Deactivated successfully.
Sep 13 00:31:23.435150 systemd[1]: Started sshd@196-78.46.184.112:22-119.1.156.50:2365.service - OpenSSH per-connection server daemon (119.1.156.50:2365).
Sep 13 00:31:23.959536 sshd[7425]: Accepted publickey for core from 147.75.109.163 port 37924 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:23.961791 sshd[7425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:23.970170 systemd-logind[1566]: New session 20 of user core.
Sep 13 00:31:23.975927 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:31:24.332104 sshd[7430]: Invalid user ubuntu from 119.1.156.50 port 2365
Sep 13 00:31:24.550243 sshd[7430]: Connection closed by invalid user ubuntu 119.1.156.50 port 2365 [preauth]
Sep 13 00:31:24.554914 systemd[1]: sshd@196-78.46.184.112:22-119.1.156.50:2365.service: Deactivated successfully.
Sep 13 00:31:24.752974 sshd[7425]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:24.757283 systemd[1]: sshd@195-78.46.184.112:22-147.75.109.163:37924.service: Deactivated successfully.
Sep 13 00:31:24.764740 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:31:24.765815 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:31:24.768992 systemd-logind[1566]: Removed session 20.
Sep 13 00:31:24.781402 systemd[1]: Started sshd@197-78.46.184.112:22-119.1.156.50:3261.service - OpenSSH per-connection server daemon (119.1.156.50:3261).
Sep 13 00:31:25.677147 sshd[7488]: Invalid user ubuntu from 119.1.156.50 port 3261
Sep 13 00:31:25.897256 sshd[7488]: Connection closed by invalid user ubuntu 119.1.156.50 port 3261 [preauth]
Sep 13 00:31:25.900507 systemd[1]: sshd@197-78.46.184.112:22-119.1.156.50:3261.service: Deactivated successfully.
Sep 13 00:31:26.128930 systemd[1]: Started sshd@198-78.46.184.112:22-119.1.156.50:4106.service - OpenSSH per-connection server daemon (119.1.156.50:4106).
Sep 13 00:31:27.041027 sshd[7515]: Invalid user ubuntu from 119.1.156.50 port 4106
Sep 13 00:31:27.262854 sshd[7515]: Connection closed by invalid user ubuntu 119.1.156.50 port 4106 [preauth]
Sep 13 00:31:27.267770 systemd[1]: sshd@198-78.46.184.112:22-119.1.156.50:4106.service: Deactivated successfully.
Sep 13 00:31:27.498064 systemd[1]: Started sshd@199-78.46.184.112:22-119.1.156.50:5123.service - OpenSSH per-connection server daemon (119.1.156.50:5123).
Sep 13 00:31:28.402588 sshd[7520]: Invalid user ubuntu from 119.1.156.50 port 5123
Sep 13 00:31:28.624165 sshd[7520]: Connection closed by invalid user ubuntu 119.1.156.50 port 5123 [preauth]
Sep 13 00:31:28.626307 systemd[1]: sshd@199-78.46.184.112:22-119.1.156.50:5123.service: Deactivated successfully.
Sep 13 00:31:28.869586 systemd[1]: Started sshd@200-78.46.184.112:22-119.1.156.50:6034.service - OpenSSH per-connection server daemon (119.1.156.50:6034).
Sep 13 00:31:29.801685 sshd[7527]: Invalid user ubuntu from 119.1.156.50 port 6034
Sep 13 00:31:29.927047 systemd[1]: Started sshd@201-78.46.184.112:22-147.75.109.163:37928.service - OpenSSH per-connection server daemon (147.75.109.163:37928).
Sep 13 00:31:30.034561 sshd[7527]: Connection closed by invalid user ubuntu 119.1.156.50 port 6034 [preauth]
Sep 13 00:31:30.037315 systemd[1]: sshd@200-78.46.184.112:22-119.1.156.50:6034.service: Deactivated successfully.
Sep 13 00:31:30.267299 systemd[1]: Started sshd@202-78.46.184.112:22-119.1.156.50:6966.service - OpenSSH per-connection server daemon (119.1.156.50:6966).
Sep 13 00:31:30.919553 sshd[7529]: Accepted publickey for core from 147.75.109.163 port 37928 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:30.921792 sshd[7529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:30.934432 systemd-logind[1566]: New session 21 of user core.
Sep 13 00:31:30.938905 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:31:31.176633 sshd[7534]: Invalid user ubuntu from 119.1.156.50 port 6966
Sep 13 00:31:31.398139 sshd[7534]: Connection closed by invalid user ubuntu 119.1.156.50 port 6966 [preauth]
Sep 13 00:31:31.404658 systemd[1]: sshd@202-78.46.184.112:22-119.1.156.50:6966.service: Deactivated successfully.
Sep 13 00:31:31.627832 systemd[1]: Started sshd@203-78.46.184.112:22-119.1.156.50:7983.service - OpenSSH per-connection server daemon (119.1.156.50:7983).
Sep 13 00:31:31.691313 sshd[7529]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:31.696953 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:31:31.697216 systemd[1]: sshd@201-78.46.184.112:22-147.75.109.163:37928.service: Deactivated successfully.
Sep 13 00:31:31.703444 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:31:31.706227 systemd-logind[1566]: Removed session 21.
Sep 13 00:31:32.529889 sshd[7549]: Invalid user ubuntu from 119.1.156.50 port 7983
Sep 13 00:31:32.749889 sshd[7549]: Connection closed by invalid user ubuntu 119.1.156.50 port 7983 [preauth]
Sep 13 00:31:32.753340 systemd[1]: sshd@203-78.46.184.112:22-119.1.156.50:7983.service: Deactivated successfully.
Sep 13 00:31:32.987780 systemd[1]: Started sshd@204-78.46.184.112:22-119.1.156.50:8829.service - OpenSSH per-connection server daemon (119.1.156.50:8829).
Sep 13 00:31:33.924768 sshd[7557]: Invalid user ubuntu from 119.1.156.50 port 8829
Sep 13 00:31:34.152968 sshd[7557]: Connection closed by invalid user ubuntu 119.1.156.50 port 8829 [preauth]
Sep 13 00:31:34.154787 systemd[1]: sshd@204-78.46.184.112:22-119.1.156.50:8829.service: Deactivated successfully.
Sep 13 00:31:34.390951 systemd[1]: Started sshd@205-78.46.184.112:22-119.1.156.50:9777.service - OpenSSH per-connection server daemon (119.1.156.50:9777).
Sep 13 00:31:35.306175 sshd[7562]: Invalid user ubuntu from 119.1.156.50 port 9777
Sep 13 00:31:35.529318 sshd[7562]: Connection closed by invalid user ubuntu 119.1.156.50 port 9777 [preauth]
Sep 13 00:31:35.532858 systemd[1]: sshd@205-78.46.184.112:22-119.1.156.50:9777.service: Deactivated successfully.
Sep 13 00:31:35.745898 systemd[1]: Started sshd@206-78.46.184.112:22-119.1.156.50:10698.service - OpenSSH per-connection server daemon (119.1.156.50:10698).
Sep 13 00:31:36.620811 sshd[7567]: Invalid user ubuntu from 119.1.156.50 port 10698
Sep 13 00:31:36.833556 sshd[7567]: Connection closed by invalid user ubuntu 119.1.156.50 port 10698 [preauth]
Sep 13 00:31:36.839072 systemd[1]: sshd@206-78.46.184.112:22-119.1.156.50:10698.service: Deactivated successfully.
Sep 13 00:31:36.863993 systemd[1]: Started sshd@207-78.46.184.112:22-147.75.109.163:32822.service - OpenSSH per-connection server daemon (147.75.109.163:32822).
Sep 13 00:31:37.077868 systemd[1]: Started sshd@208-78.46.184.112:22-119.1.156.50:11634.service - OpenSSH per-connection server daemon (119.1.156.50:11634).
Sep 13 00:31:37.840137 sshd[7572]: Accepted publickey for core from 147.75.109.163 port 32822 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:37.845422 sshd[7572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:37.853258 systemd-logind[1566]: New session 22 of user core.
Sep 13 00:31:37.857661 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:31:38.005085 sshd[7575]: Invalid user ubuntu from 119.1.156.50 port 11634
Sep 13 00:31:38.228308 sshd[7575]: Connection closed by invalid user ubuntu 119.1.156.50 port 11634 [preauth]
Sep 13 00:31:38.234074 systemd[1]: sshd@208-78.46.184.112:22-119.1.156.50:11634.service: Deactivated successfully.
Sep 13 00:31:38.452169 systemd[1]: Started sshd@209-78.46.184.112:22-119.1.156.50:12587.service - OpenSSH per-connection server daemon (119.1.156.50:12587).
Sep 13 00:31:38.642927 sshd[7572]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:38.653628 systemd[1]: sshd@207-78.46.184.112:22-147.75.109.163:32822.service: Deactivated successfully.
Sep 13 00:31:38.664442 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:31:38.668381 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:31:38.672143 systemd-logind[1566]: Removed session 22.
Sep 13 00:31:39.359701 sshd[7591]: Invalid user ubuntu from 119.1.156.50 port 12587
Sep 13 00:31:39.579638 sshd[7591]: Connection closed by invalid user ubuntu 119.1.156.50 port 12587 [preauth]
Sep 13 00:31:39.582798 systemd[1]: sshd@209-78.46.184.112:22-119.1.156.50:12587.service: Deactivated successfully.
Sep 13 00:31:39.824916 systemd[1]: Started sshd@210-78.46.184.112:22-119.1.156.50:13543.service - OpenSSH per-connection server daemon (119.1.156.50:13543).
Sep 13 00:31:40.759666 sshd[7599]: Invalid user ubuntu from 119.1.156.50 port 13543
Sep 13 00:31:40.986885 sshd[7599]: Connection closed by invalid user ubuntu 119.1.156.50 port 13543 [preauth]
Sep 13 00:31:40.989944 systemd[1]: sshd@210-78.46.184.112:22-119.1.156.50:13543.service: Deactivated successfully.
Sep 13 00:31:41.226946 systemd[1]: Started sshd@211-78.46.184.112:22-119.1.156.50:14588.service - OpenSSH per-connection server daemon (119.1.156.50:14588).
Sep 13 00:31:42.150290 sshd[7604]: Invalid user ubuntu from 119.1.156.50 port 14588
Sep 13 00:31:42.377514 sshd[7604]: Connection closed by invalid user ubuntu 119.1.156.50 port 14588 [preauth]
Sep 13 00:31:42.380642 systemd[1]: sshd@211-78.46.184.112:22-119.1.156.50:14588.service: Deactivated successfully.
Sep 13 00:31:42.606933 systemd[1]: Started sshd@212-78.46.184.112:22-119.1.156.50:15441.service - OpenSSH per-connection server daemon (119.1.156.50:15441).
Sep 13 00:31:43.259978 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.nGBUjI.mount: Deactivated successfully.
Sep 13 00:31:43.521536 sshd[7609]: Invalid user ubuntu from 119.1.156.50 port 15441
Sep 13 00:31:43.749096 sshd[7609]: Connection closed by invalid user ubuntu 119.1.156.50 port 15441 [preauth]
Sep 13 00:31:43.754839 systemd[1]: sshd@212-78.46.184.112:22-119.1.156.50:15441.service: Deactivated successfully.
Sep 13 00:31:43.823561 systemd[1]: Started sshd@213-78.46.184.112:22-147.75.109.163:44894.service - OpenSSH per-connection server daemon (147.75.109.163:44894).
Sep 13 00:31:43.985233 systemd[1]: Started sshd@214-78.46.184.112:22-119.1.156.50:16451.service - OpenSSH per-connection server daemon (119.1.156.50:16451).
Sep 13 00:31:44.813978 sshd[7633]: Accepted publickey for core from 147.75.109.163 port 44894 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:44.815613 sshd[7633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:44.826670 systemd-logind[1566]: New session 23 of user core.
Sep 13 00:31:44.831949 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:31:44.885405 sshd[7635]: Invalid user ubuntu from 119.1.156.50 port 16451
Sep 13 00:31:45.106996 sshd[7635]: Connection closed by invalid user ubuntu 119.1.156.50 port 16451 [preauth]
Sep 13 00:31:45.111425 systemd[1]: sshd@214-78.46.184.112:22-119.1.156.50:16451.service: Deactivated successfully.
Sep 13 00:31:45.335982 systemd[1]: Started sshd@215-78.46.184.112:22-119.1.156.50:17355.service - OpenSSH per-connection server daemon (119.1.156.50:17355).
Sep 13 00:31:45.606062 sshd[7633]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:45.614866 systemd[1]: sshd@213-78.46.184.112:22-147.75.109.163:44894.service: Deactivated successfully.
Sep 13 00:31:45.622155 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:31:45.622977 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:31:45.625605 systemd-logind[1566]: Removed session 23.
Sep 13 00:31:46.175992 systemd[1]: Started sshd@216-78.46.184.112:22-141.98.10.225:52006.service - OpenSSH per-connection server daemon (141.98.10.225:52006).
Sep 13 00:31:46.244231 sshd[7643]: Invalid user ubuntu from 119.1.156.50 port 17355
Sep 13 00:31:46.424032 sshd[7654]: Received disconnect from 141.98.10.225 port 52006:11: [preauth]
Sep 13 00:31:46.424032 sshd[7654]: Disconnected from 141.98.10.225 port 52006 [preauth]
Sep 13 00:31:46.427788 systemd[1]: sshd@216-78.46.184.112:22-141.98.10.225:52006.service: Deactivated successfully.
Sep 13 00:31:46.465520 sshd[7643]: Connection closed by invalid user ubuntu 119.1.156.50 port 17355 [preauth]
Sep 13 00:31:46.469421 systemd[1]: sshd@215-78.46.184.112:22-119.1.156.50:17355.service: Deactivated successfully.
Sep 13 00:31:46.697854 systemd[1]: Started sshd@217-78.46.184.112:22-119.1.156.50:18201.service - OpenSSH per-connection server daemon (119.1.156.50:18201).
Sep 13 00:31:47.610569 sshd[7662]: Invalid user ubuntu from 119.1.156.50 port 18201
Sep 13 00:31:47.837987 sshd[7662]: Connection closed by invalid user ubuntu 119.1.156.50 port 18201 [preauth]
Sep 13 00:31:47.845401 systemd[1]: sshd@217-78.46.184.112:22-119.1.156.50:18201.service: Deactivated successfully.
Sep 13 00:31:48.047793 systemd[1]: Started sshd@218-78.46.184.112:22-119.1.156.50:19104.service - OpenSSH per-connection server daemon (119.1.156.50:19104).
Sep 13 00:31:48.944183 sshd[7667]: Invalid user ubuntu from 119.1.156.50 port 19104
Sep 13 00:31:49.170128 sshd[7667]: Connection closed by invalid user ubuntu 119.1.156.50 port 19104 [preauth]
Sep 13 00:31:49.177507 systemd[1]: sshd@218-78.46.184.112:22-119.1.156.50:19104.service: Deactivated successfully.
Sep 13 00:31:49.427942 systemd[1]: Started sshd@219-78.46.184.112:22-119.1.156.50:20203.service - OpenSSH per-connection server daemon (119.1.156.50:20203).
Sep 13 00:31:50.367342 sshd[7691]: Invalid user ubuntu from 119.1.156.50 port 20203
Sep 13 00:31:50.598748 sshd[7691]: Connection closed by invalid user ubuntu 119.1.156.50 port 20203 [preauth]
Sep 13 00:31:50.603814 systemd[1]: sshd@219-78.46.184.112:22-119.1.156.50:20203.service: Deactivated successfully.
Sep 13 00:31:50.774880 systemd[1]: Started sshd@220-78.46.184.112:22-147.75.109.163:45132.service - OpenSSH per-connection server daemon (147.75.109.163:45132).
Sep 13 00:31:50.822900 systemd[1]: Started sshd@221-78.46.184.112:22-119.1.156.50:21220.service - OpenSSH per-connection server daemon (119.1.156.50:21220).
Sep 13 00:31:51.741848 sshd[7698]: Invalid user ubuntu from 119.1.156.50 port 21220
Sep 13 00:31:51.764662 sshd[7696]: Accepted publickey for core from 147.75.109.163 port 45132 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:51.767505 sshd[7696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:51.777060 systemd-logind[1566]: New session 24 of user core.
Sep 13 00:31:51.790491 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 00:31:51.962671 sshd[7698]: Connection closed by invalid user ubuntu 119.1.156.50 port 21220 [preauth]
Sep 13 00:31:51.964988 systemd[1]: sshd@221-78.46.184.112:22-119.1.156.50:21220.service: Deactivated successfully.
Sep 13 00:31:52.189153 systemd[1]: Started sshd@222-78.46.184.112:22-119.1.156.50:22127.service - OpenSSH per-connection server daemon (119.1.156.50:22127).
Sep 13 00:31:52.540532 sshd[7696]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:52.546344 systemd[1]: sshd@220-78.46.184.112:22-147.75.109.163:45132.service: Deactivated successfully.
Sep 13 00:31:52.552666 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:31:52.553894 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:31:52.555102 systemd-logind[1566]: Removed session 24.
Sep 13 00:31:53.098733 sshd[7706]: Invalid user ubuntu from 119.1.156.50 port 22127
Sep 13 00:31:53.317390 sshd[7706]: Connection closed by invalid user ubuntu 119.1.156.50 port 22127 [preauth]
Sep 13 00:31:53.322682 systemd[1]: sshd@222-78.46.184.112:22-119.1.156.50:22127.service: Deactivated successfully.
Sep 13 00:31:53.558019 systemd[1]: Started sshd@223-78.46.184.112:22-119.1.156.50:23070.service - OpenSSH per-connection server daemon (119.1.156.50:23070).
Sep 13 00:31:54.474857 sshd[7721]: Invalid user ubuntu from 119.1.156.50 port 23070
Sep 13 00:31:54.699981 sshd[7721]: Connection closed by invalid user ubuntu 119.1.156.50 port 23070 [preauth]
Sep 13 00:31:54.702292 systemd[1]: sshd@223-78.46.184.112:22-119.1.156.50:23070.service: Deactivated successfully.
Sep 13 00:31:54.947122 systemd[1]: Started sshd@224-78.46.184.112:22-119.1.156.50:24038.service - OpenSSH per-connection server daemon (119.1.156.50:24038).
Sep 13 00:31:55.874044 sshd[7765]: Invalid user ubuntu from 119.1.156.50 port 24038
Sep 13 00:31:56.099972 sshd[7765]: Connection closed by invalid user ubuntu 119.1.156.50 port 24038 [preauth]
Sep 13 00:31:56.103694 systemd[1]: sshd@224-78.46.184.112:22-119.1.156.50:24038.service: Deactivated successfully.
Sep 13 00:31:56.320124 systemd[1]: Started sshd@225-78.46.184.112:22-119.1.156.50:25052.service - OpenSSH per-connection server daemon (119.1.156.50:25052).
Sep 13 00:31:57.205070 sshd[7792]: Invalid user ubuntu from 119.1.156.50 port 25052
Sep 13 00:31:57.424858 sshd[7792]: Connection closed by invalid user ubuntu 119.1.156.50 port 25052 [preauth]
Sep 13 00:31:57.425405 systemd[1]: sshd@225-78.46.184.112:22-119.1.156.50:25052.service: Deactivated successfully.
Sep 13 00:31:57.675554 systemd[1]: Started sshd@226-78.46.184.112:22-119.1.156.50:26009.service - OpenSSH per-connection server daemon (119.1.156.50:26009).
Sep 13 00:31:57.708943 systemd[1]: Started sshd@227-78.46.184.112:22-147.75.109.163:45144.service - OpenSSH per-connection server daemon (147.75.109.163:45144).
Sep 13 00:31:58.617749 sshd[7797]: Invalid user ubuntu from 119.1.156.50 port 26009
Sep 13 00:31:58.696082 sshd[7798]: Accepted publickey for core from 147.75.109.163 port 45144 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:31:58.698420 sshd[7798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:31:58.704998 systemd-logind[1566]: New session 25 of user core.
Sep 13 00:31:58.711941 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 00:31:58.843785 sshd[7797]: Connection closed by invalid user ubuntu 119.1.156.50 port 26009 [preauth]
Sep 13 00:31:58.848541 systemd[1]: sshd@226-78.46.184.112:22-119.1.156.50:26009.service: Deactivated successfully.
Sep 13 00:31:59.070183 systemd[1]: Started sshd@228-78.46.184.112:22-119.1.156.50:27007.service - OpenSSH per-connection server daemon (119.1.156.50:27007).
Sep 13 00:31:59.477001 sshd[7798]: pam_unix(sshd:session): session closed for user core
Sep 13 00:31:59.484831 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:31:59.487313 systemd[1]: sshd@227-78.46.184.112:22-147.75.109.163:45144.service: Deactivated successfully.
Sep 13 00:31:59.492166 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:31:59.494537 systemd-logind[1566]: Removed session 25.
Sep 13 00:31:59.983567 sshd[7808]: Invalid user ubuntu from 119.1.156.50 port 27007
Sep 13 00:32:00.204672 sshd[7808]: Connection closed by invalid user ubuntu 119.1.156.50 port 27007 [preauth]
Sep 13 00:32:00.211413 systemd[1]: sshd@228-78.46.184.112:22-119.1.156.50:27007.service: Deactivated successfully.
Sep 13 00:32:00.451969 systemd[1]: Started sshd@229-78.46.184.112:22-119.1.156.50:27994.service - OpenSSH per-connection server daemon (119.1.156.50:27994).
Sep 13 00:32:01.379126 sshd[7824]: Invalid user ubuntu from 119.1.156.50 port 27994
Sep 13 00:32:01.606587 sshd[7824]: Connection closed by invalid user ubuntu 119.1.156.50 port 27994 [preauth]
Sep 13 00:32:01.609909 systemd[1]: sshd@229-78.46.184.112:22-119.1.156.50:27994.service: Deactivated successfully.
Sep 13 00:32:01.843648 systemd[1]: Started sshd@230-78.46.184.112:22-119.1.156.50:29140.service - OpenSSH per-connection server daemon (119.1.156.50:29140).
Sep 13 00:32:02.772527 sshd[7829]: Invalid user ubuntu from 119.1.156.50 port 29140
Sep 13 00:32:02.999957 sshd[7829]: Connection closed by invalid user ubuntu 119.1.156.50 port 29140 [preauth]
Sep 13 00:32:03.005442 systemd[1]: sshd@230-78.46.184.112:22-119.1.156.50:29140.service: Deactivated successfully.
Sep 13 00:32:03.224832 systemd[1]: Started sshd@231-78.46.184.112:22-119.1.156.50:30134.service - OpenSSH per-connection server daemon (119.1.156.50:30134).
Sep 13 00:32:04.112306 sshd[7834]: Invalid user ubuntu from 119.1.156.50 port 30134
Sep 13 00:32:04.328743 sshd[7834]: Connection closed by invalid user ubuntu 119.1.156.50 port 30134 [preauth]
Sep 13 00:32:04.333079 systemd[1]: sshd@231-78.46.184.112:22-119.1.156.50:30134.service: Deactivated successfully.
Sep 13 00:32:04.567397 systemd[1]: Started sshd@232-78.46.184.112:22-119.1.156.50:31000.service - OpenSSH per-connection server daemon (119.1.156.50:31000).
Sep 13 00:32:04.643902 systemd[1]: Started sshd@233-78.46.184.112:22-147.75.109.163:59838.service - OpenSSH per-connection server daemon (147.75.109.163:59838).
Sep 13 00:32:05.473358 sshd[7839]: Invalid user ubuntu from 119.1.156.50 port 31000
Sep 13 00:32:05.637300 sshd[7841]: Accepted publickey for core from 147.75.109.163 port 59838 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:05.640733 sshd[7841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:05.651817 systemd-logind[1566]: New session 26 of user core.
Sep 13 00:32:05.657187 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 00:32:05.694560 sshd[7839]: Connection closed by invalid user ubuntu 119.1.156.50 port 31000 [preauth]
Sep 13 00:32:05.698004 systemd[1]: sshd@232-78.46.184.112:22-119.1.156.50:31000.service: Deactivated successfully.
Sep 13 00:32:05.911020 systemd[1]: Started sshd@234-78.46.184.112:22-119.1.156.50:31900.service - OpenSSH per-connection server daemon (119.1.156.50:31900).
Sep 13 00:32:06.426514 sshd[7841]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:06.433898 systemd[1]: sshd@233-78.46.184.112:22-147.75.109.163:59838.service: Deactivated successfully.
Sep 13 00:32:06.439611 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:32:06.440093 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:32:06.442103 systemd-logind[1566]: Removed session 26.
Sep 13 00:32:06.792566 sshd[7849]: Invalid user ubuntu from 119.1.156.50 port 31900
Sep 13 00:32:07.007554 sshd[7849]: Connection closed by invalid user ubuntu 119.1.156.50 port 31900 [preauth]
Sep 13 00:32:07.011605 systemd[1]: sshd@234-78.46.184.112:22-119.1.156.50:31900.service: Deactivated successfully.
Sep 13 00:32:07.232009 systemd[1]: Started sshd@235-78.46.184.112:22-119.1.156.50:32841.service - OpenSSH per-connection server daemon (119.1.156.50:32841).
Sep 13 00:32:08.119667 sshd[7865]: Invalid user ubuntu from 119.1.156.50 port 32841
Sep 13 00:32:08.333489 sshd[7865]: Connection closed by invalid user ubuntu 119.1.156.50 port 32841 [preauth]
Sep 13 00:32:08.341014 systemd[1]: sshd@235-78.46.184.112:22-119.1.156.50:32841.service: Deactivated successfully.
Sep 13 00:32:08.571662 systemd[1]: Started sshd@236-78.46.184.112:22-119.1.156.50:34042.service - OpenSSH per-connection server daemon (119.1.156.50:34042).
Sep 13 00:32:09.469494 sshd[7870]: Invalid user ubuntu from 119.1.156.50 port 34042
Sep 13 00:32:09.687605 sshd[7870]: Connection closed by invalid user ubuntu 119.1.156.50 port 34042 [preauth]
Sep 13 00:32:09.694799 systemd[1]: sshd@236-78.46.184.112:22-119.1.156.50:34042.service: Deactivated successfully.
Sep 13 00:32:09.924915 systemd[1]: Started sshd@237-78.46.184.112:22-119.1.156.50:35093.service - OpenSSH per-connection server daemon (119.1.156.50:35093).
Sep 13 00:32:10.865896 sshd[7875]: Invalid user ubuntu from 119.1.156.50 port 35093
Sep 13 00:32:11.085619 sshd[7875]: Connection closed by invalid user ubuntu 119.1.156.50 port 35093 [preauth]
Sep 13 00:32:11.090613 systemd[1]: sshd@237-78.46.184.112:22-119.1.156.50:35093.service: Deactivated successfully.
Sep 13 00:32:11.318114 systemd[1]: Started sshd@238-78.46.184.112:22-119.1.156.50:36076.service - OpenSSH per-connection server daemon (119.1.156.50:36076).
Sep 13 00:32:11.591055 systemd[1]: Started sshd@239-78.46.184.112:22-147.75.109.163:54756.service - OpenSSH per-connection server daemon (147.75.109.163:54756).
Sep 13 00:32:12.242951 sshd[7880]: Invalid user ubuntu from 119.1.156.50 port 36076
Sep 13 00:32:12.466028 sshd[7880]: Connection closed by invalid user ubuntu 119.1.156.50 port 36076 [preauth]
Sep 13 00:32:12.470177 systemd[1]: sshd@238-78.46.184.112:22-119.1.156.50:36076.service: Deactivated successfully.
Sep 13 00:32:12.587915 sshd[7882]: Accepted publickey for core from 147.75.109.163 port 54756 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:12.590282 sshd[7882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:12.598416 systemd-logind[1566]: New session 27 of user core.
Sep 13 00:32:12.609168 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 00:32:12.701137 systemd[1]: Started sshd@240-78.46.184.112:22-119.1.156.50:36987.service - OpenSSH per-connection server daemon (119.1.156.50:36987).
Sep 13 00:32:13.367790 sshd[7882]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:13.373637 systemd[1]: sshd@239-78.46.184.112:22-147.75.109.163:54756.service: Deactivated successfully.
Sep 13 00:32:13.379187 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:32:13.381141 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:32:13.385133 systemd-logind[1566]: Removed session 27.
Sep 13 00:32:13.618706 sshd[7889]: Invalid user ubuntu from 119.1.156.50 port 36987
Sep 13 00:32:13.847922 sshd[7889]: Connection closed by invalid user ubuntu 119.1.156.50 port 36987 [preauth]
Sep 13 00:32:13.850288 systemd[1]: sshd@240-78.46.184.112:22-119.1.156.50:36987.service: Deactivated successfully.
Sep 13 00:32:14.089961 systemd[1]: Started sshd@241-78.46.184.112:22-119.1.156.50:38005.service - OpenSSH per-connection server daemon (119.1.156.50:38005).
Sep 13 00:32:15.038365 sshd[7905]: Invalid user ubuntu from 119.1.156.50 port 38005
Sep 13 00:32:15.264549 sshd[7905]: Connection closed by invalid user ubuntu 119.1.156.50 port 38005 [preauth]
Sep 13 00:32:15.272534 systemd[1]: sshd@241-78.46.184.112:22-119.1.156.50:38005.service: Deactivated successfully.
Sep 13 00:32:15.476653 systemd[1]: Started sshd@242-78.46.184.112:22-119.1.156.50:39034.service - OpenSSH per-connection server daemon (119.1.156.50:39034).
Sep 13 00:32:16.364845 sshd[7910]: Invalid user ubuntu from 119.1.156.50 port 39034
Sep 13 00:32:16.578325 sshd[7910]: Connection closed by invalid user ubuntu 119.1.156.50 port 39034 [preauth]
Sep 13 00:32:16.585588 systemd[1]: sshd@242-78.46.184.112:22-119.1.156.50:39034.service: Deactivated successfully.
Sep 13 00:32:16.823005 systemd[1]: Started sshd@243-78.46.184.112:22-119.1.156.50:39936.service - OpenSSH per-connection server daemon (119.1.156.50:39936).
Sep 13 00:32:17.720482 sshd[7915]: Invalid user ubuntu from 119.1.156.50 port 39936
Sep 13 00:32:17.939401 sshd[7915]: Connection closed by invalid user ubuntu 119.1.156.50 port 39936 [preauth]
Sep 13 00:32:17.944576 systemd[1]: sshd@243-78.46.184.112:22-119.1.156.50:39936.service: Deactivated successfully.
Sep 13 00:32:18.182903 systemd[1]: Started sshd@244-78.46.184.112:22-119.1.156.50:40869.service - OpenSSH per-connection server daemon (119.1.156.50:40869).
Sep 13 00:32:18.541096 systemd[1]: Started sshd@245-78.46.184.112:22-147.75.109.163:54770.service - OpenSSH per-connection server daemon (147.75.109.163:54770).
Sep 13 00:32:19.129724 sshd[7920]: Invalid user ubuntu from 119.1.156.50 port 40869
Sep 13 00:32:19.360483 sshd[7920]: Connection closed by invalid user ubuntu 119.1.156.50 port 40869 [preauth]
Sep 13 00:32:19.364763 systemd[1]: sshd@244-78.46.184.112:22-119.1.156.50:40869.service: Deactivated successfully.
Sep 13 00:32:19.524761 sshd[7922]: Accepted publickey for core from 147.75.109.163 port 54770 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:19.529615 sshd[7922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:19.545696 systemd-logind[1566]: New session 28 of user core.
Sep 13 00:32:19.548241 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 13 00:32:19.593470 systemd[1]: Started sshd@246-78.46.184.112:22-119.1.156.50:41916.service - OpenSSH per-connection server daemon (119.1.156.50:41916).
Sep 13 00:32:20.296489 sshd[7922]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:20.304109 systemd[1]: sshd@245-78.46.184.112:22-147.75.109.163:54770.service: Deactivated successfully.
Sep 13 00:32:20.308355 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:32:20.312207 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:32:20.314357 systemd-logind[1566]: Removed session 28.
Sep 13 00:32:20.511238 sshd[7929]: Invalid user ubuntu from 119.1.156.50 port 41916
Sep 13 00:32:20.735242 sshd[7929]: Connection closed by invalid user ubuntu 119.1.156.50 port 41916 [preauth]
Sep 13 00:32:20.739088 systemd[1]: sshd@246-78.46.184.112:22-119.1.156.50:41916.service: Deactivated successfully.
Sep 13 00:32:20.970913 systemd[1]: Started sshd@247-78.46.184.112:22-119.1.156.50:42890.service - OpenSSH per-connection server daemon (119.1.156.50:42890).
Sep 13 00:32:21.900712 sshd[7945]: Invalid user ubuntu from 119.1.156.50 port 42890
Sep 13 00:32:22.125257 sshd[7945]: Connection closed by invalid user ubuntu 119.1.156.50 port 42890 [preauth]
Sep 13 00:32:22.129754 systemd[1]: sshd@247-78.46.184.112:22-119.1.156.50:42890.service: Deactivated successfully.
Sep 13 00:32:22.338786 systemd[1]: Started sshd@248-78.46.184.112:22-119.1.156.50:43985.service - OpenSSH per-connection server daemon (119.1.156.50:43985).
Sep 13 00:32:23.227169 sshd[7952]: Invalid user ubuntu from 119.1.156.50 port 43985
Sep 13 00:32:23.445482 sshd[7952]: Connection closed by invalid user ubuntu 119.1.156.50 port 43985 [preauth]
Sep 13 00:32:23.445208 systemd[1]: sshd@248-78.46.184.112:22-119.1.156.50:43985.service: Deactivated successfully.
Sep 13 00:32:23.690850 systemd[1]: Started sshd@249-78.46.184.112:22-119.1.156.50:44794.service - OpenSSH per-connection server daemon (119.1.156.50:44794).
Sep 13 00:32:24.615213 sshd[7957]: Invalid user ubuntu from 119.1.156.50 port 44794
Sep 13 00:32:24.842147 sshd[7957]: Connection closed by invalid user ubuntu 119.1.156.50 port 44794 [preauth]
Sep 13 00:32:24.846114 systemd[1]: sshd@249-78.46.184.112:22-119.1.156.50:44794.service: Deactivated successfully.
Sep 13 00:32:25.080813 systemd[1]: Started sshd@250-78.46.184.112:22-119.1.156.50:45692.service - OpenSSH per-connection server daemon (119.1.156.50:45692).
Sep 13 00:32:25.458831 systemd[1]: Started sshd@251-78.46.184.112:22-147.75.109.163:52814.service - OpenSSH per-connection server daemon (147.75.109.163:52814).
Sep 13 00:32:26.032564 sshd[8002]: Invalid user ubuntu from 119.1.156.50 port 45692
Sep 13 00:32:26.260658 sshd[8002]: Connection closed by invalid user ubuntu 119.1.156.50 port 45692 [preauth]
Sep 13 00:32:26.266903 systemd[1]: sshd@250-78.46.184.112:22-119.1.156.50:45692.service: Deactivated successfully.
Sep 13 00:32:26.444531 sshd[8004]: Accepted publickey for core from 147.75.109.163 port 52814 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:26.447634 sshd[8004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:26.456607 systemd-logind[1566]: New session 29 of user core.
Sep 13 00:32:26.469239 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 13 00:32:26.477918 systemd[1]: Started sshd@252-78.46.184.112:22-119.1.156.50:46693.service - OpenSSH per-connection server daemon (119.1.156.50:46693).
Sep 13 00:32:27.215798 sshd[8004]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:27.223806 systemd[1]: sshd@251-78.46.184.112:22-147.75.109.163:52814.service: Deactivated successfully.
Sep 13 00:32:27.230682 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit.
Sep 13 00:32:27.230882 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 00:32:27.234057 systemd-logind[1566]: Removed session 29.
Sep 13 00:32:27.377802 sshd[8032]: Invalid user ubuntu from 119.1.156.50 port 46693
Sep 13 00:32:27.595612 sshd[8032]: Connection closed by invalid user ubuntu 119.1.156.50 port 46693 [preauth]
Sep 13 00:32:27.597828 systemd[1]: sshd@252-78.46.184.112:22-119.1.156.50:46693.service: Deactivated successfully.
Sep 13 00:32:27.834853 systemd[1]: Started sshd@253-78.46.184.112:22-119.1.156.50:47573.service - OpenSSH per-connection server daemon (119.1.156.50:47573).
Sep 13 00:32:28.754675 sshd[8048]: Invalid user ubuntu from 119.1.156.50 port 47573
Sep 13 00:32:28.979718 sshd[8048]: Connection closed by invalid user ubuntu 119.1.156.50 port 47573 [preauth]
Sep 13 00:32:28.986563 systemd[1]: sshd@253-78.46.184.112:22-119.1.156.50:47573.service: Deactivated successfully.
Sep 13 00:32:29.207436 systemd[1]: Started sshd@254-78.46.184.112:22-119.1.156.50:48537.service - OpenSSH per-connection server daemon (119.1.156.50:48537).
Sep 13 00:32:30.132420 sshd[8055]: Invalid user ubuntu from 119.1.156.50 port 48537
Sep 13 00:32:30.354552 sshd[8055]: Connection closed by invalid user ubuntu 119.1.156.50 port 48537 [preauth]
Sep 13 00:32:30.358167 systemd[1]: sshd@254-78.46.184.112:22-119.1.156.50:48537.service: Deactivated successfully.
Sep 13 00:32:30.573857 systemd[1]: Started sshd@255-78.46.184.112:22-119.1.156.50:49549.service - OpenSSH per-connection server daemon (119.1.156.50:49549).
Sep 13 00:32:31.473122 sshd[8060]: Invalid user ubuntu from 119.1.156.50 port 49549
Sep 13 00:32:31.693192 sshd[8060]: Connection closed by invalid user ubuntu 119.1.156.50 port 49549 [preauth]
Sep 13 00:32:31.694895 systemd[1]: sshd@255-78.46.184.112:22-119.1.156.50:49549.service: Deactivated successfully.
Sep 13 00:32:31.926896 systemd[1]: Started sshd@256-78.46.184.112:22-119.1.156.50:50561.service - OpenSSH per-connection server daemon (119.1.156.50:50561).
Sep 13 00:32:32.387796 systemd[1]: Started sshd@257-78.46.184.112:22-147.75.109.163:35682.service - OpenSSH per-connection server daemon (147.75.109.163:35682).
Sep 13 00:32:32.823781 sshd[8065]: Invalid user ubuntu from 119.1.156.50 port 50561
Sep 13 00:32:33.042666 sshd[8065]: Connection closed by invalid user ubuntu 119.1.156.50 port 50561 [preauth]
Sep 13 00:32:33.047537 systemd[1]: sshd@256-78.46.184.112:22-119.1.156.50:50561.service: Deactivated successfully.
Sep 13 00:32:33.270011 systemd[1]: Started sshd@258-78.46.184.112:22-119.1.156.50:51411.service - OpenSSH per-connection server daemon (119.1.156.50:51411).
Sep 13 00:32:33.380346 sshd[8067]: Accepted publickey for core from 147.75.109.163 port 35682 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:33.382942 sshd[8067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:33.392271 systemd-logind[1566]: New session 30 of user core.
Sep 13 00:32:33.401042 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 13 00:32:34.144862 sshd[8067]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:34.151785 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit.
Sep 13 00:32:34.152349 systemd[1]: sshd@257-78.46.184.112:22-147.75.109.163:35682.service: Deactivated successfully.
Sep 13 00:32:34.157438 systemd[1]: session-30.scope: Deactivated successfully.
Sep 13 00:32:34.164070 sshd[8073]: Invalid user ubuntu from 119.1.156.50 port 51411
Sep 13 00:32:34.164640 systemd-logind[1566]: Removed session 30.
Sep 13 00:32:34.382567 sshd[8073]: Connection closed by invalid user ubuntu 119.1.156.50 port 51411 [preauth]
Sep 13 00:32:34.385312 systemd[1]: sshd@258-78.46.184.112:22-119.1.156.50:51411.service: Deactivated successfully.
Sep 13 00:32:34.610286 systemd[1]: Started sshd@259-78.46.184.112:22-119.1.156.50:52290.service - OpenSSH per-connection server daemon (119.1.156.50:52290).
Sep 13 00:32:35.527785 sshd[8091]: Invalid user ubuntu from 119.1.156.50 port 52290
Sep 13 00:32:35.747083 sshd[8091]: Connection closed by invalid user ubuntu 119.1.156.50 port 52290 [preauth]
Sep 13 00:32:35.750168 systemd[1]: sshd@259-78.46.184.112:22-119.1.156.50:52290.service: Deactivated successfully.
Sep 13 00:32:35.990934 systemd[1]: Started sshd@260-78.46.184.112:22-119.1.156.50:53221.service - OpenSSH per-connection server daemon (119.1.156.50:53221).
Sep 13 00:32:36.934808 sshd[8096]: Invalid user ubuntu from 119.1.156.50 port 53221
Sep 13 00:32:37.161281 sshd[8096]: Connection closed by invalid user ubuntu 119.1.156.50 port 53221 [preauth]
Sep 13 00:32:37.164794 systemd[1]: sshd@260-78.46.184.112:22-119.1.156.50:53221.service: Deactivated successfully.
Sep 13 00:32:37.369919 systemd[1]: Started sshd@261-78.46.184.112:22-119.1.156.50:54145.service - OpenSSH per-connection server daemon (119.1.156.50:54145).
Sep 13 00:32:38.248852 sshd[8101]: Invalid user ubuntu from 119.1.156.50 port 54145
Sep 13 00:32:38.460626 sshd[8101]: Connection closed by invalid user ubuntu 119.1.156.50 port 54145 [preauth]
Sep 13 00:32:38.464408 systemd[1]: sshd@261-78.46.184.112:22-119.1.156.50:54145.service: Deactivated successfully.
Sep 13 00:32:38.709883 systemd[1]: Started sshd@262-78.46.184.112:22-119.1.156.50:55040.service - OpenSSH per-connection server daemon (119.1.156.50:55040).
Sep 13 00:32:39.317234 systemd[1]: Started sshd@263-78.46.184.112:22-147.75.109.163:35698.service - OpenSSH per-connection server daemon (147.75.109.163:35698).
Sep 13 00:32:39.638562 sshd[8106]: Invalid user ubuntu from 119.1.156.50 port 55040
Sep 13 00:32:39.864991 sshd[8106]: Connection closed by invalid user ubuntu 119.1.156.50 port 55040 [preauth]
Sep 13 00:32:39.872222 systemd[1]: sshd@262-78.46.184.112:22-119.1.156.50:55040.service: Deactivated successfully.
Sep 13 00:32:40.094433 systemd[1]: Started sshd@264-78.46.184.112:22-119.1.156.50:56104.service - OpenSSH per-connection server daemon (119.1.156.50:56104).
Sep 13 00:32:40.316533 sshd[8108]: Accepted publickey for core from 147.75.109.163 port 35698 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:40.319429 sshd[8108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:40.326152 systemd-logind[1566]: New session 31 of user core.
Sep 13 00:32:40.329847 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 13 00:32:41.016232 sshd[8113]: Invalid user ubuntu from 119.1.156.50 port 56104
Sep 13 00:32:41.099218 sshd[8108]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:41.108801 systemd-logind[1566]: Session 31 logged out. Waiting for processes to exit.
Sep 13 00:32:41.109998 systemd[1]: sshd@263-78.46.184.112:22-147.75.109.163:35698.service: Deactivated successfully.
Sep 13 00:32:41.116951 systemd[1]: session-31.scope: Deactivated successfully.
Sep 13 00:32:41.119110 systemd-logind[1566]: Removed session 31.
Sep 13 00:32:41.237490 sshd[8113]: Connection closed by invalid user ubuntu 119.1.156.50 port 56104 [preauth]
Sep 13 00:32:41.240881 systemd[1]: sshd@264-78.46.184.112:22-119.1.156.50:56104.service: Deactivated successfully.
Sep 13 00:32:41.472992 systemd[1]: Started sshd@265-78.46.184.112:22-119.1.156.50:57004.service - OpenSSH per-connection server daemon (119.1.156.50:57004).
Sep 13 00:32:42.411265 sshd[8131]: Invalid user ubuntu from 119.1.156.50 port 57004
Sep 13 00:32:42.637418 sshd[8131]: Connection closed by invalid user ubuntu 119.1.156.50 port 57004 [preauth]
Sep 13 00:32:42.641655 systemd[1]: sshd@265-78.46.184.112:22-119.1.156.50:57004.service: Deactivated successfully.
Sep 13 00:32:42.877303 systemd[1]: Started sshd@266-78.46.184.112:22-119.1.156.50:58060.service - OpenSSH per-connection server daemon (119.1.156.50:58060).
Sep 13 00:32:43.809123 sshd[8136]: Invalid user ubuntu from 119.1.156.50 port 58060
Sep 13 00:32:44.036576 sshd[8136]: Connection closed by invalid user ubuntu 119.1.156.50 port 58060 [preauth]
Sep 13 00:32:44.042556 systemd[1]: sshd@266-78.46.184.112:22-119.1.156.50:58060.service: Deactivated successfully.
Sep 13 00:32:44.260919 systemd[1]: Started sshd@267-78.46.184.112:22-119.1.156.50:59103.service - OpenSSH per-connection server daemon (119.1.156.50:59103).
Sep 13 00:32:45.164897 sshd[8163]: Invalid user ubuntu from 119.1.156.50 port 59103
Sep 13 00:32:45.386801 sshd[8163]: Connection closed by invalid user ubuntu 119.1.156.50 port 59103 [preauth]
Sep 13 00:32:45.391598 systemd[1]: sshd@267-78.46.184.112:22-119.1.156.50:59103.service: Deactivated successfully.
Sep 13 00:32:45.624177 systemd[1]: Started sshd@268-78.46.184.112:22-119.1.156.50:60006.service - OpenSSH per-connection server daemon (119.1.156.50:60006).
Sep 13 00:32:46.260901 systemd[1]: Started sshd@269-78.46.184.112:22-147.75.109.163:60006.service - OpenSSH per-connection server daemon (147.75.109.163:60006).
Sep 13 00:32:46.534791 sshd[8168]: Invalid user ubuntu from 119.1.156.50 port 60006
Sep 13 00:32:46.757625 sshd[8168]: Connection closed by invalid user ubuntu 119.1.156.50 port 60006 [preauth]
Sep 13 00:32:46.761565 systemd[1]: sshd@268-78.46.184.112:22-119.1.156.50:60006.service: Deactivated successfully.
Sep 13 00:32:47.012182 systemd[1]: Started sshd@270-78.46.184.112:22-119.1.156.50:60785.service - OpenSSH per-connection server daemon (119.1.156.50:60785).
Sep 13 00:32:47.235410 sshd[8170]: Accepted publickey for core from 147.75.109.163 port 60006 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:47.237397 sshd[8170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:47.245073 systemd-logind[1566]: New session 32 of user core.
Sep 13 00:32:47.250934 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 13 00:32:47.929190 sshd[8189]: Invalid user ubuntu from 119.1.156.50 port 60785
Sep 13 00:32:47.986109 sshd[8170]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:47.990935 systemd[1]: sshd@269-78.46.184.112:22-147.75.109.163:60006.service: Deactivated successfully.
Sep 13 00:32:47.996062 systemd[1]: session-32.scope: Deactivated successfully.
Sep 13 00:32:47.997269 systemd-logind[1566]: Session 32 logged out. Waiting for processes to exit.
Sep 13 00:32:47.998630 systemd-logind[1566]: Removed session 32.
Sep 13 00:32:48.153233 sshd[8189]: Connection closed by invalid user ubuntu 119.1.156.50 port 60785 [preauth]
Sep 13 00:32:48.155244 systemd[1]: sshd@270-78.46.184.112:22-119.1.156.50:60785.service: Deactivated successfully.
Sep 13 00:32:48.380822 systemd[1]: Started sshd@271-78.46.184.112:22-119.1.156.50:61714.service - OpenSSH per-connection server daemon (119.1.156.50:61714).
Sep 13 00:32:49.289482 sshd[8208]: Invalid user ubuntu from 119.1.156.50 port 61714
Sep 13 00:32:49.512389 sshd[8208]: Connection closed by invalid user ubuntu 119.1.156.50 port 61714 [preauth]
Sep 13 00:32:49.518089 systemd[1]: sshd@271-78.46.184.112:22-119.1.156.50:61714.service: Deactivated successfully.
Sep 13 00:32:49.750838 systemd[1]: Started sshd@272-78.46.184.112:22-119.1.156.50:62693.service - OpenSSH per-connection server daemon (119.1.156.50:62693).
Sep 13 00:32:50.695408 sshd[8239]: Invalid user ubuntu from 119.1.156.50 port 62693
Sep 13 00:32:50.923083 sshd[8239]: Connection closed by invalid user ubuntu 119.1.156.50 port 62693 [preauth]
Sep 13 00:32:50.927055 systemd[1]: sshd@272-78.46.184.112:22-119.1.156.50:62693.service: Deactivated successfully.
Sep 13 00:32:51.145810 systemd[1]: Started sshd@273-78.46.184.112:22-119.1.156.50:63694.service - OpenSSH per-connection server daemon (119.1.156.50:63694).
Sep 13 00:32:52.038792 sshd[8244]: Invalid user ubuntu from 119.1.156.50 port 63694
Sep 13 00:32:52.256703 sshd[8244]: Connection closed by invalid user ubuntu 119.1.156.50 port 63694 [preauth]
Sep 13 00:32:52.262285 systemd[1]: sshd@273-78.46.184.112:22-119.1.156.50:63694.service: Deactivated successfully.
Sep 13 00:32:52.489939 systemd[1]: Started sshd@274-78.46.184.112:22-119.1.156.50:64648.service - OpenSSH per-connection server daemon (119.1.156.50:64648).
Sep 13 00:32:53.156905 systemd[1]: Started sshd@275-78.46.184.112:22-147.75.109.163:51920.service - OpenSSH per-connection server daemon (147.75.109.163:51920).
Sep 13 00:32:53.852270 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.vZUak5.mount: Deactivated successfully.
Sep 13 00:32:54.050945 sshd[8249]: Invalid user ubuntu from 119.1.156.50 port 64648
Sep 13 00:32:54.142549 sshd[8251]: Accepted publickey for core from 147.75.109.163 port 51920 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:32:54.144970 sshd[8251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:32:54.152175 systemd-logind[1566]: New session 33 of user core.
Sep 13 00:32:54.160079 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep 13 00:32:54.272898 sshd[8249]: Connection closed by invalid user ubuntu 119.1.156.50 port 64648 [preauth]
Sep 13 00:32:54.277093 systemd[1]: sshd@274-78.46.184.112:22-119.1.156.50:64648.service: Deactivated successfully.
Sep 13 00:32:54.502840 systemd[1]: Started sshd@276-78.46.184.112:22-119.1.156.50:1534.service - OpenSSH per-connection server daemon (119.1.156.50:1534).
Sep 13 00:32:54.905755 sshd[8251]: pam_unix(sshd:session): session closed for user core
Sep 13 00:32:54.912661 systemd[1]: sshd@275-78.46.184.112:22-147.75.109.163:51920.service: Deactivated successfully.
Sep 13 00:32:54.918015 systemd-logind[1566]: Session 33 logged out. Waiting for processes to exit.
Sep 13 00:32:54.918727 systemd[1]: session-33.scope: Deactivated successfully.
Sep 13 00:32:54.920354 systemd-logind[1566]: Removed session 33.
Sep 13 00:32:55.406238 sshd[8300]: Invalid user ubuntu from 119.1.156.50 port 1534
Sep 13 00:32:55.627528 sshd[8300]: Connection closed by invalid user ubuntu 119.1.156.50 port 1534 [preauth]
Sep 13 00:32:55.631929 systemd[1]: sshd@276-78.46.184.112:22-119.1.156.50:1534.service: Deactivated successfully.
Sep 13 00:32:55.844667 systemd[1]: Started sshd@277-78.46.184.112:22-119.1.156.50:2476.service - OpenSSH per-connection server daemon (119.1.156.50:2476).
Sep 13 00:32:56.748240 sshd[8338]: Invalid user ubuntu from 119.1.156.50 port 2476
Sep 13 00:32:56.962643 sshd[8338]: Connection closed by invalid user ubuntu 119.1.156.50 port 2476 [preauth]
Sep 13 00:32:56.965909 systemd[1]: sshd@277-78.46.184.112:22-119.1.156.50:2476.service: Deactivated successfully.
Sep 13 00:32:57.218014 systemd[1]: Started sshd@278-78.46.184.112:22-119.1.156.50:3423.service - OpenSSH per-connection server daemon (119.1.156.50:3423).
Sep 13 00:32:58.152352 sshd[8344]: Invalid user ubuntu from 119.1.156.50 port 3423
Sep 13 00:32:58.379603 sshd[8344]: Connection closed by invalid user ubuntu 119.1.156.50 port 3423 [preauth]
Sep 13 00:32:58.383214 systemd[1]: sshd@278-78.46.184.112:22-119.1.156.50:3423.service: Deactivated successfully.
Sep 13 00:32:58.610070 systemd[1]: Started sshd@279-78.46.184.112:22-119.1.156.50:4336.service - OpenSSH per-connection server daemon (119.1.156.50:4336).
Sep 13 00:32:59.525312 sshd[8349]: Invalid user ubuntu from 119.1.156.50 port 4336
Sep 13 00:32:59.749331 sshd[8349]: Connection closed by invalid user ubuntu 119.1.156.50 port 4336 [preauth]
Sep 13 00:32:59.752349 systemd[1]: sshd@279-78.46.184.112:22-119.1.156.50:4336.service: Deactivated successfully.
Sep 13 00:32:59.983903 systemd[1]: Started sshd@280-78.46.184.112:22-119.1.156.50:5256.service - OpenSSH per-connection server daemon (119.1.156.50:5256).
Sep 13 00:33:00.069980 systemd[1]: Started sshd@281-78.46.184.112:22-147.75.109.163:51928.service - OpenSSH per-connection server daemon (147.75.109.163:51928).
Sep 13 00:33:00.908517 sshd[8356]: Invalid user ubuntu from 119.1.156.50 port 5256
Sep 13 00:33:01.046209 sshd[8358]: Accepted publickey for core from 147.75.109.163 port 51928 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:01.048256 sshd[8358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:01.054980 systemd-logind[1566]: New session 34 of user core.
Sep 13 00:33:01.059955 systemd[1]: Started session-34.scope - Session 34 of User core.
Sep 13 00:33:01.134570 sshd[8356]: Connection closed by invalid user ubuntu 119.1.156.50 port 5256 [preauth]
Sep 13 00:33:01.135799 systemd[1]: sshd@280-78.46.184.112:22-119.1.156.50:5256.service: Deactivated successfully.
Sep 13 00:33:01.363805 systemd[1]: Started sshd@282-78.46.184.112:22-119.1.156.50:6400.service - OpenSSH per-connection server daemon (119.1.156.50:6400).
Sep 13 00:33:01.806073 sshd[8358]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:01.815684 systemd[1]: sshd@281-78.46.184.112:22-147.75.109.163:51928.service: Deactivated successfully.
Sep 13 00:33:01.818682 systemd[1]: session-34.scope: Deactivated successfully.
Sep 13 00:33:01.820328 systemd-logind[1566]: Session 34 logged out. Waiting for processes to exit.
Sep 13 00:33:01.822149 systemd-logind[1566]: Removed session 34.
Sep 13 00:33:02.279852 sshd[8365]: Invalid user debian from 119.1.156.50 port 6400
Sep 13 00:33:02.501514 sshd[8365]: Connection closed by invalid user debian 119.1.156.50 port 6400 [preauth]
Sep 13 00:33:02.507151 systemd[1]: sshd@282-78.46.184.112:22-119.1.156.50:6400.service: Deactivated successfully.
Sep 13 00:33:02.736869 systemd[1]: Started sshd@283-78.46.184.112:22-119.1.156.50:7416.service - OpenSSH per-connection server daemon (119.1.156.50:7416).
Sep 13 00:33:03.659838 sshd[8381]: Invalid user debian from 119.1.156.50 port 7416
Sep 13 00:33:03.884371 sshd[8381]: Connection closed by invalid user debian 119.1.156.50 port 7416 [preauth]
Sep 13 00:33:03.887637 systemd[1]: sshd@283-78.46.184.112:22-119.1.156.50:7416.service: Deactivated successfully.
Sep 13 00:33:04.110432 systemd[1]: Started sshd@284-78.46.184.112:22-119.1.156.50:8316.service - OpenSSH per-connection server daemon (119.1.156.50:8316).
Sep 13 00:33:05.010319 sshd[8386]: Invalid user debian from 119.1.156.50 port 8316
Sep 13 00:33:05.230752 sshd[8386]: Connection closed by invalid user debian 119.1.156.50 port 8316 [preauth]
Sep 13 00:33:05.233585 systemd[1]: sshd@284-78.46.184.112:22-119.1.156.50:8316.service: Deactivated successfully.
Sep 13 00:33:05.472935 systemd[1]: Started sshd@285-78.46.184.112:22-119.1.156.50:9196.service - OpenSSH per-connection server daemon (119.1.156.50:9196).
Sep 13 00:33:06.392358 sshd[8391]: Invalid user debian from 119.1.156.50 port 9196
Sep 13 00:33:06.617200 sshd[8391]: Connection closed by invalid user debian 119.1.156.50 port 9196 [preauth]
Sep 13 00:33:06.619860 systemd[1]: sshd@285-78.46.184.112:22-119.1.156.50:9196.service: Deactivated successfully.
Sep 13 00:33:06.829833 systemd[1]: Started sshd@286-78.46.184.112:22-119.1.156.50:10131.service - OpenSSH per-connection server daemon (119.1.156.50:10131).
Sep 13 00:33:06.974903 systemd[1]: Started sshd@287-78.46.184.112:22-147.75.109.163:41602.service - OpenSSH per-connection server daemon (147.75.109.163:41602).
Sep 13 00:33:07.721053 sshd[8396]: Invalid user debian from 119.1.156.50 port 10131
Sep 13 00:33:07.938491 sshd[8396]: Connection closed by invalid user debian 119.1.156.50 port 10131 [preauth]
Sep 13 00:33:07.940623 systemd[1]: sshd@286-78.46.184.112:22-119.1.156.50:10131.service: Deactivated successfully.
Sep 13 00:33:07.953532 sshd[8398]: Accepted publickey for core from 147.75.109.163 port 41602 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:07.957544 sshd[8398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:07.971896 systemd-logind[1566]: New session 35 of user core.
Sep 13 00:33:07.977630 systemd[1]: Started session-35.scope - Session 35 of User core.
Sep 13 00:33:08.175235 systemd[1]: Started sshd@288-78.46.184.112:22-119.1.156.50:11084.service - OpenSSH per-connection server daemon (119.1.156.50:11084).
Sep 13 00:33:08.711167 sshd[8398]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:08.717441 systemd-logind[1566]: Session 35 logged out. Waiting for processes to exit.
Sep 13 00:33:08.718484 systemd[1]: sshd@287-78.46.184.112:22-147.75.109.163:41602.service: Deactivated successfully.
Sep 13 00:33:08.720351 systemd[1]: session-35.scope: Deactivated successfully.
Sep 13 00:33:08.724007 systemd-logind[1566]: Removed session 35.
Sep 13 00:33:09.103625 sshd[8405]: Invalid user debian from 119.1.156.50 port 11084
Sep 13 00:33:09.327208 sshd[8405]: Connection closed by invalid user debian 119.1.156.50 port 11084 [preauth]
Sep 13 00:33:09.329803 systemd[1]: sshd@288-78.46.184.112:22-119.1.156.50:11084.service: Deactivated successfully.
Sep 13 00:33:09.540990 systemd[1]: Started sshd@289-78.46.184.112:22-119.1.156.50:12092.service - OpenSSH per-connection server daemon (119.1.156.50:12092).
Sep 13 00:33:10.423821 sshd[8420]: Invalid user debian from 119.1.156.50 port 12092
Sep 13 00:33:10.636798 sshd[8420]: Connection closed by invalid user debian 119.1.156.50 port 12092 [preauth]
Sep 13 00:33:10.641380 systemd[1]: sshd@289-78.46.184.112:22-119.1.156.50:12092.service: Deactivated successfully.
Sep 13 00:33:10.867959 systemd[1]: Started sshd@290-78.46.184.112:22-119.1.156.50:12870.service - OpenSSH per-connection server daemon (119.1.156.50:12870).
Sep 13 00:33:11.761891 sshd[8425]: Invalid user debian from 119.1.156.50 port 12870
Sep 13 00:33:11.981926 sshd[8425]: Connection closed by invalid user debian 119.1.156.50 port 12870 [preauth]
Sep 13 00:33:11.985324 systemd[1]: sshd@290-78.46.184.112:22-119.1.156.50:12870.service: Deactivated successfully.
Sep 13 00:33:12.209312 systemd[1]: Started sshd@291-78.46.184.112:22-119.1.156.50:13893.service - OpenSSH per-connection server daemon (119.1.156.50:13893).
Sep 13 00:33:13.112614 sshd[8430]: Invalid user debian from 119.1.156.50 port 13893
Sep 13 00:33:13.333140 sshd[8430]: Connection closed by invalid user debian 119.1.156.50 port 13893 [preauth]
Sep 13 00:33:13.335846 systemd[1]: sshd@291-78.46.184.112:22-119.1.156.50:13893.service: Deactivated successfully.
Sep 13 00:33:13.881110 systemd[1]: Started sshd@292-78.46.184.112:22-147.75.109.163:45426.service - OpenSSH per-connection server daemon (147.75.109.163:45426).
Sep 13 00:33:14.870384 sshd[8435]: Accepted publickey for core from 147.75.109.163 port 45426 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:14.871929 sshd[8435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:14.879595 systemd-logind[1566]: New session 36 of user core.
Sep 13 00:33:14.887622 systemd[1]: Started session-36.scope - Session 36 of User core.
Sep 13 00:33:15.671671 sshd[8435]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:15.679766 systemd[1]: sshd@292-78.46.184.112:22-147.75.109.163:45426.service: Deactivated successfully.
Sep 13 00:33:15.684654 systemd[1]: session-36.scope: Deactivated successfully.
Sep 13 00:33:15.686561 systemd-logind[1566]: Session 36 logged out. Waiting for processes to exit.
Sep 13 00:33:15.688616 systemd-logind[1566]: Removed session 36.
Sep 13 00:33:20.838822 systemd[1]: Started sshd@293-78.46.184.112:22-147.75.109.163:60862.service - OpenSSH per-connection server daemon (147.75.109.163:60862).
Sep 13 00:33:21.811204 sshd[8450]: Accepted publickey for core from 147.75.109.163 port 60862 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:21.815152 sshd[8450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:21.821815 systemd-logind[1566]: New session 37 of user core.
Sep 13 00:33:21.828079 systemd[1]: Started session-37.scope - Session 37 of User core.
Sep 13 00:33:22.572634 sshd[8450]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:22.577777 systemd[1]: sshd@293-78.46.184.112:22-147.75.109.163:60862.service: Deactivated successfully.
Sep 13 00:33:22.581970 systemd-logind[1566]: Session 37 logged out. Waiting for processes to exit.
Sep 13 00:33:22.581977 systemd[1]: session-37.scope: Deactivated successfully.
Sep 13 00:33:22.584768 systemd-logind[1566]: Removed session 37.
Sep 13 00:33:27.743961 systemd[1]: Started sshd@294-78.46.184.112:22-147.75.109.163:60872.service - OpenSSH per-connection server daemon (147.75.109.163:60872).
Sep 13 00:33:28.726514 sshd[8531]: Accepted publickey for core from 147.75.109.163 port 60872 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:28.729936 sshd[8531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:28.739569 systemd-logind[1566]: New session 38 of user core.
Sep 13 00:33:28.746110 systemd[1]: Started session-38.scope - Session 38 of User core.
Sep 13 00:33:29.490733 sshd[8531]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:29.495496 systemd-logind[1566]: Session 38 logged out. Waiting for processes to exit.
Sep 13 00:33:29.495869 systemd[1]: sshd@294-78.46.184.112:22-147.75.109.163:60872.service: Deactivated successfully.
Sep 13 00:33:29.501852 systemd[1]: session-38.scope: Deactivated successfully.
Sep 13 00:33:29.503155 systemd-logind[1566]: Removed session 38.
Sep 13 00:33:34.654751 systemd[1]: Started sshd@295-78.46.184.112:22-147.75.109.163:33122.service - OpenSSH per-connection server daemon (147.75.109.163:33122).
Sep 13 00:33:35.627519 sshd[8549]: Accepted publickey for core from 147.75.109.163 port 33122 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:35.630205 sshd[8549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:35.634891 systemd-logind[1566]: New session 39 of user core.
Sep 13 00:33:35.641042 systemd[1]: Started session-39.scope - Session 39 of User core.
Sep 13 00:33:36.373868 sshd[8549]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:36.380429 systemd[1]: sshd@295-78.46.184.112:22-147.75.109.163:33122.service: Deactivated successfully.
Sep 13 00:33:36.383757 systemd[1]: session-39.scope: Deactivated successfully.
Sep 13 00:33:36.383912 systemd-logind[1566]: Session 39 logged out. Waiting for processes to exit.
Sep 13 00:33:36.385969 systemd-logind[1566]: Removed session 39.
Sep 13 00:33:41.541767 systemd[1]: Started sshd@296-78.46.184.112:22-147.75.109.163:55962.service - OpenSSH per-connection server daemon (147.75.109.163:55962).
Sep 13 00:33:42.522829 sshd[8564]: Accepted publickey for core from 147.75.109.163 port 55962 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:42.525075 sshd[8564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:42.530271 systemd-logind[1566]: New session 40 of user core.
Sep 13 00:33:42.536047 systemd[1]: Started session-40.scope - Session 40 of User core.
Sep 13 00:33:43.310015 sshd[8564]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:43.318259 systemd[1]: sshd@296-78.46.184.112:22-147.75.109.163:55962.service: Deactivated successfully.
Sep 13 00:33:43.326901 systemd[1]: session-40.scope: Deactivated successfully.
Sep 13 00:33:43.329127 systemd-logind[1566]: Session 40 logged out. Waiting for processes to exit.
Sep 13 00:33:43.338277 systemd-logind[1566]: Removed session 40.
Sep 13 00:33:48.477039 systemd[1]: Started sshd@297-78.46.184.112:22-147.75.109.163:55978.service - OpenSSH per-connection server daemon (147.75.109.163:55978).
Sep 13 00:33:48.758422 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.uIe2Kn.mount: Deactivated successfully.
Sep 13 00:33:49.460656 sshd[8598]: Accepted publickey for core from 147.75.109.163 port 55978 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:49.463903 sshd[8598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:49.470909 systemd-logind[1566]: New session 41 of user core.
Sep 13 00:33:49.479183 systemd[1]: Started session-41.scope - Session 41 of User core.
Sep 13 00:33:50.234523 sshd[8598]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:50.240151 systemd[1]: sshd@297-78.46.184.112:22-147.75.109.163:55978.service: Deactivated successfully.
Sep 13 00:33:50.245305 systemd-logind[1566]: Session 41 logged out. Waiting for processes to exit.
Sep 13 00:33:50.245666 systemd[1]: session-41.scope: Deactivated successfully.
Sep 13 00:33:50.248175 systemd-logind[1566]: Removed session 41.
Sep 13 00:33:55.401887 systemd[1]: Started sshd@298-78.46.184.112:22-147.75.109.163:58478.service - OpenSSH per-connection server daemon (147.75.109.163:58478).
Sep 13 00:33:56.376510 sshd[8673]: Accepted publickey for core from 147.75.109.163 port 58478 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:33:56.379220 sshd[8673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:33:56.389392 systemd-logind[1566]: New session 42 of user core.
Sep 13 00:33:56.396190 systemd[1]: Started session-42.scope - Session 42 of User core.
Sep 13 00:33:57.132078 sshd[8673]: pam_unix(sshd:session): session closed for user core
Sep 13 00:33:57.137640 systemd[1]: sshd@298-78.46.184.112:22-147.75.109.163:58478.service: Deactivated successfully.
Sep 13 00:33:57.143219 systemd-logind[1566]: Session 42 logged out. Waiting for processes to exit.
Sep 13 00:33:57.144258 systemd[1]: session-42.scope: Deactivated successfully.
Sep 13 00:33:57.148041 systemd-logind[1566]: Removed session 42.
Sep 13 00:34:02.301903 systemd[1]: Started sshd@299-78.46.184.112:22-147.75.109.163:40108.service - OpenSSH per-connection server daemon (147.75.109.163:40108).
Sep 13 00:34:03.296192 sshd[8714]: Accepted publickey for core from 147.75.109.163 port 40108 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:03.295690 sshd[8714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:03.308348 systemd-logind[1566]: New session 43 of user core.
Sep 13 00:34:03.315303 systemd[1]: Started session-43.scope - Session 43 of User core.
Sep 13 00:34:04.082040 sshd[8714]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:04.098182 systemd[1]: sshd@299-78.46.184.112:22-147.75.109.163:40108.service: Deactivated successfully.
Sep 13 00:34:04.106428 systemd[1]: session-43.scope: Deactivated successfully.
Sep 13 00:34:04.107533 systemd-logind[1566]: Session 43 logged out. Waiting for processes to exit.
Sep 13 00:34:04.110178 systemd-logind[1566]: Removed session 43.
Sep 13 00:34:04.244000 systemd[1]: Started sshd@300-78.46.184.112:22-147.75.109.163:40118.service - OpenSSH per-connection server daemon (147.75.109.163:40118).
Sep 13 00:34:05.217518 sshd[8728]: Accepted publickey for core from 147.75.109.163 port 40118 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:05.220772 sshd[8728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:05.227250 systemd-logind[1566]: New session 44 of user core.
Sep 13 00:34:05.237377 systemd[1]: Started session-44.scope - Session 44 of User core.
Sep 13 00:34:06.036988 sshd[8728]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:06.042755 systemd-logind[1566]: Session 44 logged out. Waiting for processes to exit.
Sep 13 00:34:06.043044 systemd[1]: sshd@300-78.46.184.112:22-147.75.109.163:40118.service: Deactivated successfully.
Sep 13 00:34:06.049383 systemd[1]: session-44.scope: Deactivated successfully.
Sep 13 00:34:06.051975 systemd-logind[1566]: Removed session 44.
Sep 13 00:34:06.203393 systemd[1]: Started sshd@301-78.46.184.112:22-147.75.109.163:40120.service - OpenSSH per-connection server daemon (147.75.109.163:40120).
Sep 13 00:34:07.197022 sshd[8740]: Accepted publickey for core from 147.75.109.163 port 40120 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:07.199647 sshd[8740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:07.209530 systemd-logind[1566]: New session 45 of user core.
Sep 13 00:34:07.218993 systemd[1]: Started session-45.scope - Session 45 of User core.
Sep 13 00:34:07.964950 sshd[8740]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:07.970797 systemd-logind[1566]: Session 45 logged out. Waiting for processes to exit.
Sep 13 00:34:07.971904 systemd[1]: sshd@301-78.46.184.112:22-147.75.109.163:40120.service: Deactivated successfully.
Sep 13 00:34:07.979187 systemd[1]: session-45.scope: Deactivated successfully.
Sep 13 00:34:07.980658 systemd-logind[1566]: Removed session 45.
Sep 13 00:34:13.134864 systemd[1]: Started sshd@302-78.46.184.112:22-147.75.109.163:54004.service - OpenSSH per-connection server daemon (147.75.109.163:54004).
Sep 13 00:34:14.137149 sshd[8755]: Accepted publickey for core from 147.75.109.163 port 54004 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:14.141064 sshd[8755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:14.163691 systemd-logind[1566]: New session 46 of user core.
Sep 13 00:34:14.172963 systemd[1]: Started session-46.scope - Session 46 of User core.
Sep 13 00:34:14.956713 sshd[8755]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:14.967001 systemd[1]: sshd@302-78.46.184.112:22-147.75.109.163:54004.service: Deactivated successfully.
Sep 13 00:34:14.972404 systemd[1]: session-46.scope: Deactivated successfully.
Sep 13 00:34:14.977563 systemd-logind[1566]: Session 46 logged out. Waiting for processes to exit.
Sep 13 00:34:14.979501 systemd-logind[1566]: Removed session 46.
Sep 13 00:34:20.128386 systemd[1]: Started sshd@303-78.46.184.112:22-147.75.109.163:54006.service - OpenSSH per-connection server daemon (147.75.109.163:54006).
Sep 13 00:34:21.111842 sshd[8769]: Accepted publickey for core from 147.75.109.163 port 54006 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:21.114616 sshd[8769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:21.124713 systemd-logind[1566]: New session 47 of user core.
Sep 13 00:34:21.132340 systemd[1]: Started session-47.scope - Session 47 of User core.
Sep 13 00:34:21.885500 sshd[8769]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:21.890693 systemd[1]: sshd@303-78.46.184.112:22-147.75.109.163:54006.service: Deactivated successfully.
Sep 13 00:34:21.897136 systemd-logind[1566]: Session 47 logged out. Waiting for processes to exit.
Sep 13 00:34:21.897916 systemd[1]: session-47.scope: Deactivated successfully.
Sep 13 00:34:21.900751 systemd-logind[1566]: Removed session 47.
Sep 13 00:34:27.049843 systemd[1]: Started sshd@304-78.46.184.112:22-147.75.109.163:57690.service - OpenSSH per-connection server daemon (147.75.109.163:57690).
Sep 13 00:34:28.049434 sshd[8866]: Accepted publickey for core from 147.75.109.163 port 57690 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:28.052640 sshd[8866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:28.062088 systemd-logind[1566]: New session 48 of user core.
Sep 13 00:34:28.066565 systemd[1]: Started session-48.scope - Session 48 of User core.
Sep 13 00:34:28.825069 sshd[8866]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:28.831139 systemd[1]: sshd@304-78.46.184.112:22-147.75.109.163:57690.service: Deactivated successfully.
Sep 13 00:34:28.831563 systemd-logind[1566]: Session 48 logged out. Waiting for processes to exit.
Sep 13 00:34:28.839393 systemd[1]: session-48.scope: Deactivated successfully.
Sep 13 00:34:28.841191 systemd-logind[1566]: Removed session 48.
Sep 13 00:34:33.993889 systemd[1]: Started sshd@305-78.46.184.112:22-147.75.109.163:35900.service - OpenSSH per-connection server daemon (147.75.109.163:35900).
Sep 13 00:34:34.983436 sshd[8882]: Accepted publickey for core from 147.75.109.163 port 35900 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:34.989886 sshd[8882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:35.000601 systemd-logind[1566]: New session 49 of user core.
Sep 13 00:34:35.006981 systemd[1]: Started session-49.scope - Session 49 of User core.
Sep 13 00:34:35.765539 sshd[8882]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:35.773507 systemd[1]: sshd@305-78.46.184.112:22-147.75.109.163:35900.service: Deactivated successfully.
Sep 13 00:34:35.774628 systemd-logind[1566]: Session 49 logged out. Waiting for processes to exit.
Sep 13 00:34:35.781071 systemd[1]: session-49.scope: Deactivated successfully.
Sep 13 00:34:35.787441 systemd-logind[1566]: Removed session 49.
Sep 13 00:34:40.931868 systemd[1]: Started sshd@306-78.46.184.112:22-147.75.109.163:57730.service - OpenSSH per-connection server daemon (147.75.109.163:57730).
Sep 13 00:34:41.923206 sshd[8897]: Accepted publickey for core from 147.75.109.163 port 57730 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:41.927621 sshd[8897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:41.938707 systemd-logind[1566]: New session 50 of user core.
Sep 13 00:34:41.943965 systemd[1]: Started session-50.scope - Session 50 of User core.
Sep 13 00:34:42.715285 sshd[8897]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:42.719442 systemd-logind[1566]: Session 50 logged out. Waiting for processes to exit.
Sep 13 00:34:42.721925 systemd[1]: sshd@306-78.46.184.112:22-147.75.109.163:57730.service: Deactivated successfully.
Sep 13 00:34:42.725883 systemd[1]: session-50.scope: Deactivated successfully.
Sep 13 00:34:42.729076 systemd-logind[1566]: Removed session 50.
Sep 13 00:34:43.250936 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.wx2kvV.mount: Deactivated successfully.
Sep 13 00:34:47.885566 systemd[1]: Started sshd@307-78.46.184.112:22-147.75.109.163:57736.service - OpenSSH per-connection server daemon (147.75.109.163:57736).
Sep 13 00:34:48.906512 sshd[8930]: Accepted publickey for core from 147.75.109.163 port 57736 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:48.913025 sshd[8930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:48.928179 systemd-logind[1566]: New session 51 of user core.
Sep 13 00:34:48.932989 systemd[1]: Started session-51.scope - Session 51 of User core.
Sep 13 00:34:49.673757 sshd[8930]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:49.679804 systemd[1]: sshd@307-78.46.184.112:22-147.75.109.163:57736.service: Deactivated successfully.
Sep 13 00:34:49.684168 systemd[1]: session-51.scope: Deactivated successfully.
Sep 13 00:34:49.686361 systemd-logind[1566]: Session 51 logged out. Waiting for processes to exit.
Sep 13 00:34:49.687761 systemd-logind[1566]: Removed session 51.
Sep 13 00:34:54.840885 systemd[1]: Started sshd@308-78.46.184.112:22-147.75.109.163:60514.service - OpenSSH per-connection server daemon (147.75.109.163:60514).
Sep 13 00:34:55.823173 sshd[9001]: Accepted publickey for core from 147.75.109.163 port 60514 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:34:55.825401 sshd[9001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:34:55.835885 systemd-logind[1566]: New session 52 of user core.
Sep 13 00:34:55.840078 systemd[1]: Started session-52.scope - Session 52 of User core.
Sep 13 00:34:56.581867 sshd[9001]: pam_unix(sshd:session): session closed for user core
Sep 13 00:34:56.587924 systemd[1]: sshd@308-78.46.184.112:22-147.75.109.163:60514.service: Deactivated successfully.
Sep 13 00:34:56.594083 systemd[1]: session-52.scope: Deactivated successfully.
Sep 13 00:34:56.597252 systemd-logind[1566]: Session 52 logged out. Waiting for processes to exit.
Sep 13 00:34:56.599124 systemd-logind[1566]: Removed session 52.
Sep 13 00:35:01.749417 systemd[1]: Started sshd@309-78.46.184.112:22-147.75.109.163:37834.service - OpenSSH per-connection server daemon (147.75.109.163:37834).
Sep 13 00:35:02.725817 sshd[9037]: Accepted publickey for core from 147.75.109.163 port 37834 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:02.728652 sshd[9037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:02.735002 systemd-logind[1566]: New session 53 of user core.
Sep 13 00:35:02.742244 systemd[1]: Started session-53.scope - Session 53 of User core.
Sep 13 00:35:03.476263 sshd[9037]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:03.480636 systemd[1]: sshd@309-78.46.184.112:22-147.75.109.163:37834.service: Deactivated successfully.
Sep 13 00:35:03.487901 systemd-logind[1566]: Session 53 logged out. Waiting for processes to exit.
Sep 13 00:35:03.489067 systemd[1]: session-53.scope: Deactivated successfully.
Sep 13 00:35:03.492107 systemd-logind[1566]: Removed session 53.
Sep 13 00:35:08.639778 systemd[1]: Started sshd@310-78.46.184.112:22-147.75.109.163:37850.service - OpenSSH per-connection server daemon (147.75.109.163:37850).
Sep 13 00:35:09.611620 sshd[9051]: Accepted publickey for core from 147.75.109.163 port 37850 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:09.613766 sshd[9051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:09.620225 systemd-logind[1566]: New session 54 of user core.
Sep 13 00:35:09.626187 systemd[1]: Started session-54.scope - Session 54 of User core.
Sep 13 00:35:10.363834 sshd[9051]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:10.370136 systemd[1]: sshd@310-78.46.184.112:22-147.75.109.163:37850.service: Deactivated successfully.
Sep 13 00:35:10.374798 systemd[1]: session-54.scope: Deactivated successfully.
Sep 13 00:35:10.376098 systemd-logind[1566]: Session 54 logged out. Waiting for processes to exit.
Sep 13 00:35:10.377324 systemd-logind[1566]: Removed session 54.
Sep 13 00:35:15.535981 systemd[1]: Started sshd@311-78.46.184.112:22-147.75.109.163:50424.service - OpenSSH per-connection server daemon (147.75.109.163:50424).
Sep 13 00:35:16.530332 sshd[9065]: Accepted publickey for core from 147.75.109.163 port 50424 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:16.532285 sshd[9065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:16.537524 systemd-logind[1566]: New session 55 of user core.
Sep 13 00:35:16.542856 systemd[1]: Started session-55.scope - Session 55 of User core.
Sep 13 00:35:17.292235 sshd[9065]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:17.300174 systemd[1]: sshd@311-78.46.184.112:22-147.75.109.163:50424.service: Deactivated successfully.
Sep 13 00:35:17.306071 systemd[1]: session-55.scope: Deactivated successfully.
Sep 13 00:35:17.307703 systemd-logind[1566]: Session 55 logged out. Waiting for processes to exit.
Sep 13 00:35:17.309577 systemd-logind[1566]: Removed session 55.
Sep 13 00:35:22.460872 systemd[1]: Started sshd@312-78.46.184.112:22-147.75.109.163:34826.service - OpenSSH per-connection server daemon (147.75.109.163:34826).
Sep 13 00:35:23.440200 sshd[9081]: Accepted publickey for core from 147.75.109.163 port 34826 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:23.443704 sshd[9081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:23.450120 systemd-logind[1566]: New session 56 of user core.
Sep 13 00:35:23.454949 systemd[1]: Started session-56.scope - Session 56 of User core.
Sep 13 00:35:24.200137 sshd[9081]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:24.205149 systemd[1]: sshd@312-78.46.184.112:22-147.75.109.163:34826.service: Deactivated successfully.
Sep 13 00:35:24.211725 systemd[1]: session-56.scope: Deactivated successfully.
Sep 13 00:35:24.213420 systemd-logind[1566]: Session 56 logged out. Waiting for processes to exit.
Sep 13 00:35:24.215335 systemd-logind[1566]: Removed session 56.
Sep 13 00:35:25.770719 systemd[1]: run-containerd-runc-k8s.io-b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f-runc.RyJjfq.mount: Deactivated successfully.
Sep 13 00:35:29.371908 systemd[1]: Started sshd@313-78.46.184.112:22-147.75.109.163:34832.service - OpenSSH per-connection server daemon (147.75.109.163:34832).
Sep 13 00:35:30.368099 sshd[9157]: Accepted publickey for core from 147.75.109.163 port 34832 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:30.370216 sshd[9157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:30.376989 systemd-logind[1566]: New session 57 of user core.
Sep 13 00:35:30.384587 systemd[1]: Started session-57.scope - Session 57 of User core.
Sep 13 00:35:31.130769 sshd[9157]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:31.137188 systemd[1]: sshd@313-78.46.184.112:22-147.75.109.163:34832.service: Deactivated successfully.
Sep 13 00:35:31.142856 systemd[1]: session-57.scope: Deactivated successfully.
Sep 13 00:35:31.143979 systemd-logind[1566]: Session 57 logged out. Waiting for processes to exit.
Sep 13 00:35:31.145265 systemd-logind[1566]: Removed session 57.
Sep 13 00:35:36.291790 systemd[1]: Started sshd@314-78.46.184.112:22-147.75.109.163:58798.service - OpenSSH per-connection server daemon (147.75.109.163:58798).
Sep 13 00:35:37.266212 sshd[9171]: Accepted publickey for core from 147.75.109.163 port 58798 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:37.268797 sshd[9171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:37.274557 systemd-logind[1566]: New session 58 of user core.
Sep 13 00:35:37.278881 systemd[1]: Started session-58.scope - Session 58 of User core.
Sep 13 00:35:38.020783 sshd[9171]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:38.026238 systemd[1]: sshd@314-78.46.184.112:22-147.75.109.163:58798.service: Deactivated successfully.
Sep 13 00:35:38.030903 systemd[1]: session-58.scope: Deactivated successfully.
Sep 13 00:35:38.031831 systemd-logind[1566]: Session 58 logged out. Waiting for processes to exit.
Sep 13 00:35:38.033118 systemd-logind[1566]: Removed session 58.
Sep 13 00:35:43.197157 systemd[1]: Started sshd@315-78.46.184.112:22-147.75.109.163:55026.service - OpenSSH per-connection server daemon (147.75.109.163:55026).
Sep 13 00:35:43.249040 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.Y5Kc1p.mount: Deactivated successfully.
Sep 13 00:35:44.236605 sshd[9185]: Accepted publickey for core from 147.75.109.163 port 55026 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:44.239155 sshd[9185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:44.245639 systemd-logind[1566]: New session 59 of user core.
Sep 13 00:35:44.249775 systemd[1]: Started session-59.scope - Session 59 of User core.
Sep 13 00:35:45.038707 sshd[9185]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:45.044097 systemd[1]: sshd@315-78.46.184.112:22-147.75.109.163:55026.service: Deactivated successfully.
Sep 13 00:35:45.054500 systemd[1]: session-59.scope: Deactivated successfully.
Sep 13 00:35:45.056532 systemd-logind[1566]: Session 59 logged out. Waiting for processes to exit.
Sep 13 00:35:45.058536 systemd-logind[1566]: Removed session 59.
Sep 13 00:35:48.750283 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.0RQPdl.mount: Deactivated successfully.
Sep 13 00:35:50.204303 systemd[1]: Started sshd@316-78.46.184.112:22-147.75.109.163:48722.service - OpenSSH per-connection server daemon (147.75.109.163:48722).
Sep 13 00:35:51.210128 sshd[9238]: Accepted publickey for core from 147.75.109.163 port 48722 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:51.214798 sshd[9238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:51.228524 systemd-logind[1566]: New session 60 of user core.
Sep 13 00:35:51.232830 systemd[1]: Started session-60.scope - Session 60 of User core.
Sep 13 00:35:52.003845 sshd[9238]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:52.010825 systemd[1]: sshd@316-78.46.184.112:22-147.75.109.163:48722.service: Deactivated successfully.
Sep 13 00:35:52.016106 systemd[1]: session-60.scope: Deactivated successfully.
Sep 13 00:35:52.017890 systemd-logind[1566]: Session 60 logged out. Waiting for processes to exit.
Sep 13 00:35:52.019411 systemd-logind[1566]: Removed session 60.
Sep 13 00:35:53.839801 systemd[1]: run-containerd-runc-k8s.io-6a43b89e61f2d9daf37a711ba438e8f8faf608837086e2c5f304d32fdf52126f-runc.22hvkA.mount: Deactivated successfully.
Sep 13 00:35:56.493021 systemd[1]: Started sshd@317-78.46.184.112:22-91.224.92.32:52390.service - OpenSSH per-connection server daemon (91.224.92.32:52390).
Sep 13 00:35:56.722870 sshd[9334]: Received disconnect from 91.224.92.32 port 52390:11: [preauth]
Sep 13 00:35:56.722870 sshd[9334]: Disconnected from 91.224.92.32 port 52390 [preauth]
Sep 13 00:35:56.725431 systemd[1]: sshd@317-78.46.184.112:22-91.224.92.32:52390.service: Deactivated successfully.
Sep 13 00:35:57.172922 systemd[1]: Started sshd@318-78.46.184.112:22-147.75.109.163:48734.service - OpenSSH per-connection server daemon (147.75.109.163:48734).
Sep 13 00:35:58.172614 sshd[9339]: Accepted publickey for core from 147.75.109.163 port 48734 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:35:58.175030 sshd[9339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:35:58.182054 systemd-logind[1566]: New session 61 of user core.
Sep 13 00:35:58.185788 systemd[1]: Started session-61.scope - Session 61 of User core.
Sep 13 00:35:58.949070 sshd[9339]: pam_unix(sshd:session): session closed for user core
Sep 13 00:35:58.953312 systemd-logind[1566]: Session 61 logged out. Waiting for processes to exit.
Sep 13 00:35:58.954353 systemd[1]: sshd@318-78.46.184.112:22-147.75.109.163:48734.service: Deactivated successfully.
Sep 13 00:35:58.960269 systemd[1]: session-61.scope: Deactivated successfully.
Sep 13 00:35:58.961109 systemd-logind[1566]: Removed session 61.
Sep 13 00:36:04.119255 systemd[1]: Started sshd@319-78.46.184.112:22-147.75.109.163:57106.service - OpenSSH per-connection server daemon (147.75.109.163:57106).
Sep 13 00:36:05.111918 sshd[9356]: Accepted publickey for core from 147.75.109.163 port 57106 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:05.115210 sshd[9356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:05.123374 systemd-logind[1566]: New session 62 of user core.
Sep 13 00:36:05.128025 systemd[1]: Started session-62.scope - Session 62 of User core.
Sep 13 00:36:05.872056 sshd[9356]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:05.877776 systemd-logind[1566]: Session 62 logged out. Waiting for processes to exit.
Sep 13 00:36:05.879182 systemd[1]: sshd@319-78.46.184.112:22-147.75.109.163:57106.service: Deactivated successfully.
Sep 13 00:36:05.884503 systemd[1]: session-62.scope: Deactivated successfully.
Sep 13 00:36:05.886100 systemd-logind[1566]: Removed session 62.
Sep 13 00:36:11.041803 systemd[1]: Started sshd@320-78.46.184.112:22-147.75.109.163:49516.service - OpenSSH per-connection server daemon (147.75.109.163:49516).
Sep 13 00:36:12.040491 sshd[9371]: Accepted publickey for core from 147.75.109.163 port 49516 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:12.043985 sshd[9371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:12.050532 systemd-logind[1566]: New session 63 of user core.
Sep 13 00:36:12.055851 systemd[1]: Started session-63.scope - Session 63 of User core.
Sep 13 00:36:12.825233 sshd[9371]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:12.832358 systemd[1]: sshd@320-78.46.184.112:22-147.75.109.163:49516.service: Deactivated successfully.
Sep 13 00:36:12.837134 systemd[1]: session-63.scope: Deactivated successfully.
Sep 13 00:36:12.838675 systemd-logind[1566]: Session 63 logged out. Waiting for processes to exit.
Sep 13 00:36:12.840572 systemd-logind[1566]: Removed session 63.
Sep 13 00:36:17.995266 systemd[1]: Started sshd@321-78.46.184.112:22-147.75.109.163:49520.service - OpenSSH per-connection server daemon (147.75.109.163:49520).
Sep 13 00:36:18.982336 sshd[9385]: Accepted publickey for core from 147.75.109.163 port 49520 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:18.984238 sshd[9385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:18.989580 systemd-logind[1566]: New session 64 of user core.
Sep 13 00:36:18.996928 systemd[1]: Started session-64.scope - Session 64 of User core.
Sep 13 00:36:19.737577 sshd[9385]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:19.745333 systemd-logind[1566]: Session 64 logged out. Waiting for processes to exit.
Sep 13 00:36:19.746857 systemd[1]: sshd@321-78.46.184.112:22-147.75.109.163:49520.service: Deactivated successfully.
Sep 13 00:36:19.760656 systemd[1]: session-64.scope: Deactivated successfully.
Sep 13 00:36:19.764627 systemd-logind[1566]: Removed session 64.
Sep 13 00:36:24.904781 systemd[1]: Started sshd@322-78.46.184.112:22-147.75.109.163:38388.service - OpenSSH per-connection server daemon (147.75.109.163:38388).
Sep 13 00:36:25.771419 systemd[1]: run-containerd-runc-k8s.io-b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f-runc.ZtjYjX.mount: Deactivated successfully.
Sep 13 00:36:25.912344 sshd[9448]: Accepted publickey for core from 147.75.109.163 port 38388 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:25.914979 sshd[9448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:25.921191 systemd-logind[1566]: New session 65 of user core.
Sep 13 00:36:25.929926 systemd[1]: Started session-65.scope - Session 65 of User core.
Sep 13 00:36:26.682004 sshd[9448]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:26.686665 systemd-logind[1566]: Session 65 logged out. Waiting for processes to exit.
Sep 13 00:36:26.687023 systemd[1]: sshd@322-78.46.184.112:22-147.75.109.163:38388.service: Deactivated successfully.
Sep 13 00:36:26.692611 systemd[1]: session-65.scope: Deactivated successfully.
Sep 13 00:36:26.696171 systemd-logind[1566]: Removed session 65.
Sep 13 00:36:31.842806 systemd[1]: Started sshd@323-78.46.184.112:22-147.75.109.163:34680.service - OpenSSH per-connection server daemon (147.75.109.163:34680).
Sep 13 00:36:32.829603 sshd[9487]: Accepted publickey for core from 147.75.109.163 port 34680 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:32.832922 sshd[9487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:32.840064 systemd-logind[1566]: New session 66 of user core.
Sep 13 00:36:32.841908 systemd[1]: Started session-66.scope - Session 66 of User core.
Sep 13 00:36:33.584837 sshd[9487]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:33.590323 systemd[1]: sshd@323-78.46.184.112:22-147.75.109.163:34680.service: Deactivated successfully.
Sep 13 00:36:33.597072 systemd[1]: session-66.scope: Deactivated successfully.
Sep 13 00:36:33.598278 systemd-logind[1566]: Session 66 logged out. Waiting for processes to exit.
Sep 13 00:36:33.601124 systemd-logind[1566]: Removed session 66.
Sep 13 00:36:38.750957 systemd[1]: Started sshd@324-78.46.184.112:22-147.75.109.163:34692.service - OpenSSH per-connection server daemon (147.75.109.163:34692).
Sep 13 00:36:39.721551 sshd[9500]: Accepted publickey for core from 147.75.109.163 port 34692 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:39.724859 sshd[9500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:39.738109 systemd-logind[1566]: New session 67 of user core.
Sep 13 00:36:39.748275 systemd[1]: Started session-67.scope - Session 67 of User core.
Sep 13 00:36:40.477642 sshd[9500]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:40.482990 systemd[1]: sshd@324-78.46.184.112:22-147.75.109.163:34692.service: Deactivated successfully.
Sep 13 00:36:40.490114 systemd-logind[1566]: Session 67 logged out. Waiting for processes to exit.
Sep 13 00:36:40.491387 systemd[1]: session-67.scope: Deactivated successfully.
Sep 13 00:36:40.494811 systemd-logind[1566]: Removed session 67.
Sep 13 00:36:45.648802 systemd[1]: Started sshd@325-78.46.184.112:22-147.75.109.163:43958.service - OpenSSH per-connection server daemon (147.75.109.163:43958).
Sep 13 00:36:46.630811 sshd[9533]: Accepted publickey for core from 147.75.109.163 port 43958 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:46.633319 sshd[9533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:46.642866 systemd-logind[1566]: New session 68 of user core.
Sep 13 00:36:46.646194 systemd[1]: Started session-68.scope - Session 68 of User core.
Sep 13 00:36:47.405182 sshd[9533]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:47.413114 systemd-logind[1566]: Session 68 logged out. Waiting for processes to exit.
Sep 13 00:36:47.414119 systemd[1]: sshd@325-78.46.184.112:22-147.75.109.163:43958.service: Deactivated successfully.
Sep 13 00:36:47.419441 systemd[1]: session-68.scope: Deactivated successfully.
Sep 13 00:36:47.423158 systemd-logind[1566]: Removed session 68.
Sep 13 00:36:52.576887 systemd[1]: Started sshd@326-78.46.184.112:22-147.75.109.163:52232.service - OpenSSH per-connection server daemon (147.75.109.163:52232).
Sep 13 00:36:52.625962 update_engine[1570]: I20250913 00:36:52.625887 1570 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 13 00:36:52.625962 update_engine[1570]: I20250913 00:36:52.625950 1570 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 13 00:36:52.626569 update_engine[1570]: I20250913 00:36:52.626236 1570 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 13 00:36:52.626779 update_engine[1570]: I20250913 00:36:52.626737 1570 omaha_request_params.cc:62] Current group set to lts
Sep 13 00:36:52.626993 update_engine[1570]: I20250913 00:36:52.626853 1570 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 13 00:36:52.626993 update_engine[1570]: I20250913 00:36:52.626866 1570 update_attempter.cc:643] Scheduling an action processor start.
Sep 13 00:36:52.626993 update_engine[1570]: I20250913 00:36:52.626888 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:36:52.635968 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 13 00:36:52.637137 update_engine[1570]: I20250913 00:36:52.636334 1570 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 13 00:36:52.637137 update_engine[1570]: I20250913 00:36:52.636521 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:36:52.637137 update_engine[1570]: I20250913 00:36:52.636533 1570 omaha_request_action.cc:272] Request:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]:
Sep 13 00:36:52.637137 update_engine[1570]: I20250913 00:36:52.636540 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:36:52.641514 update_engine[1570]: I20250913 00:36:52.640949 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:36:52.641514 update_engine[1570]: I20250913 00:36:52.641405 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:36:52.642870 update_engine[1570]: E20250913 00:36:52.642745 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:36:52.642870 update_engine[1570]: I20250913 00:36:52.642835 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 13 00:36:53.573854 sshd[9567]: Accepted publickey for core from 147.75.109.163 port 52232 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:36:53.579988 sshd[9567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:36:53.594567 systemd-logind[1566]: New session 69 of user core.
Sep 13 00:36:53.598942 systemd[1]: Started session-69.scope - Session 69 of User core.
Sep 13 00:36:53.869183 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.exu1ao.mount: Deactivated successfully.
Sep 13 00:36:54.369782 sshd[9567]: pam_unix(sshd:session): session closed for user core
Sep 13 00:36:54.384287 systemd[1]: sshd@326-78.46.184.112:22-147.75.109.163:52232.service: Deactivated successfully.
Sep 13 00:36:54.386558 systemd-logind[1566]: Session 69 logged out. Waiting for processes to exit.
Sep 13 00:36:54.397240 systemd[1]: session-69.scope: Deactivated successfully.
Sep 13 00:36:54.404549 systemd-logind[1566]: Removed session 69.
Sep 13 00:36:59.535851 systemd[1]: Started sshd@327-78.46.184.112:22-147.75.109.163:52242.service - OpenSSH per-connection server daemon (147.75.109.163:52242).
Sep 13 00:37:00.519533 sshd[9645]: Accepted publickey for core from 147.75.109.163 port 52242 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:00.521894 sshd[9645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:00.528079 systemd-logind[1566]: New session 70 of user core.
Sep 13 00:37:00.536960 systemd[1]: Started session-70.scope - Session 70 of User core.
Sep 13 00:37:01.291844 sshd[9645]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:01.299743 systemd-logind[1566]: Session 70 logged out. Waiting for processes to exit.
Sep 13 00:37:01.299964 systemd[1]: sshd@327-78.46.184.112:22-147.75.109.163:52242.service: Deactivated successfully.
Sep 13 00:37:01.305372 systemd[1]: session-70.scope: Deactivated successfully.
Sep 13 00:37:01.307168 systemd-logind[1566]: Removed session 70.
Sep 13 00:37:02.622151 update_engine[1570]: I20250913 00:37:02.621997 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:37:02.622787 update_engine[1570]: I20250913 00:37:02.622268 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:37:02.622787 update_engine[1570]: I20250913 00:37:02.622551 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:37:02.623509 update_engine[1570]: E20250913 00:37:02.623310 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:37:02.623509 update_engine[1570]: I20250913 00:37:02.623414 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 13 00:37:06.464202 systemd[1]: Started sshd@328-78.46.184.112:22-147.75.109.163:38554.service - OpenSSH per-connection server daemon (147.75.109.163:38554).
Sep 13 00:37:07.459540 sshd[9659]: Accepted publickey for core from 147.75.109.163 port 38554 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:07.462996 sshd[9659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:07.475616 systemd-logind[1566]: New session 71 of user core.
Sep 13 00:37:07.480127 systemd[1]: Started session-71.scope - Session 71 of User core.
Sep 13 00:37:08.237580 sshd[9659]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:08.244804 systemd-logind[1566]: Session 71 logged out. Waiting for processes to exit.
Sep 13 00:37:08.245810 systemd[1]: sshd@328-78.46.184.112:22-147.75.109.163:38554.service: Deactivated successfully.
Sep 13 00:37:08.251137 systemd[1]: session-71.scope: Deactivated successfully.
Sep 13 00:37:08.252730 systemd-logind[1566]: Removed session 71.
Sep 13 00:37:12.626121 update_engine[1570]: I20250913 00:37:12.624894 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:37:12.626121 update_engine[1570]: I20250913 00:37:12.625615 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:37:12.626121 update_engine[1570]: I20250913 00:37:12.626009 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:37:12.626984 update_engine[1570]: E20250913 00:37:12.626907 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:37:12.627170 update_engine[1570]: I20250913 00:37:12.627117 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 13 00:37:13.406724 systemd[1]: Started sshd@329-78.46.184.112:22-147.75.109.163:52864.service - OpenSSH per-connection server daemon (147.75.109.163:52864).
Sep 13 00:37:14.379485 sshd[9673]: Accepted publickey for core from 147.75.109.163 port 52864 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:14.383262 sshd[9673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:14.389801 systemd-logind[1566]: New session 72 of user core.
Sep 13 00:37:14.397818 systemd[1]: Started session-72.scope - Session 72 of User core.
Sep 13 00:37:15.145093 sshd[9673]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:15.149584 systemd[1]: sshd@329-78.46.184.112:22-147.75.109.163:52864.service: Deactivated successfully.
Sep 13 00:37:15.157060 systemd-logind[1566]: Session 72 logged out. Waiting for processes to exit.
Sep 13 00:37:15.157753 systemd[1]: session-72.scope: Deactivated successfully.
Sep 13 00:37:15.161099 systemd-logind[1566]: Removed session 72.
Sep 13 00:37:20.315955 systemd[1]: Started sshd@330-78.46.184.112:22-147.75.109.163:49988.service - OpenSSH per-connection server daemon (147.75.109.163:49988).
Sep 13 00:37:21.315700 sshd[9687]: Accepted publickey for core from 147.75.109.163 port 49988 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:21.318623 sshd[9687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:21.326281 systemd-logind[1566]: New session 73 of user core.
Sep 13 00:37:21.330976 systemd[1]: Started session-73.scope - Session 73 of User core.
Sep 13 00:37:22.086974 sshd[9687]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:22.092395 systemd[1]: sshd@330-78.46.184.112:22-147.75.109.163:49988.service: Deactivated successfully.
Sep 13 00:37:22.100238 systemd[1]: session-73.scope: Deactivated successfully.
Sep 13 00:37:22.102815 systemd-logind[1566]: Session 73 logged out. Waiting for processes to exit.
Sep 13 00:37:22.104927 systemd-logind[1566]: Removed session 73.
Sep 13 00:37:22.622300 update_engine[1570]: I20250913 00:37:22.621505 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:37:22.622300 update_engine[1570]: I20250913 00:37:22.621901 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:37:22.622300 update_engine[1570]: I20250913 00:37:22.622223 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:37:22.623658 update_engine[1570]: E20250913 00:37:22.623577 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623677 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623694 1570 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:37:22.623868 update_engine[1570]: E20250913 00:37:22.623800 1570 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623826 1570 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623835 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623843 1570 update_attempter.cc:306] Processing Done.
Sep 13 00:37:22.623868 update_engine[1570]: E20250913 00:37:22.623862 1570 update_attempter.cc:619] Update failed.
Sep 13 00:37:22.623868 update_engine[1570]: I20250913 00:37:22.623872 1570 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 13 00:37:22.624251 update_engine[1570]: I20250913 00:37:22.623881 1570 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 13 00:37:22.624251 update_engine[1570]: I20250913 00:37:22.623890 1570 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 13 00:37:22.624347 update_engine[1570]: I20250913 00:37:22.624218 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 13 00:37:22.624347 update_engine[1570]: I20250913 00:37:22.624296 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 13 00:37:22.624347 update_engine[1570]: I20250913 00:37:22.624307 1570 omaha_request_action.cc:272] Request:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]:
Sep 13 00:37:22.624347 update_engine[1570]: I20250913 00:37:22.624336 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 13 00:37:22.624843 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 13 00:37:22.625797 update_engine[1570]: I20250913 00:37:22.625023 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 13 00:37:22.625797 update_engine[1570]: I20250913 00:37:22.625286 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 13 00:37:22.626182 update_engine[1570]: E20250913 00:37:22.626114 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 13 00:37:22.626274 update_engine[1570]: I20250913 00:37:22.626240 1570 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 13 00:37:22.626274 update_engine[1570]: I20250913 00:37:22.626266 1570 omaha_request_action.cc:617] Omaha request response:
Sep 13 00:37:22.626412 update_engine[1570]: I20250913 00:37:22.626278 1570 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:37:22.626412 update_engine[1570]: I20250913 00:37:22.626287 1570 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 13 00:37:22.626412 update_engine[1570]: I20250913 00:37:22.626295 1570 update_attempter.cc:306] Processing Done.
Sep 13 00:37:22.626412 update_engine[1570]: I20250913 00:37:22.626305 1570 update_attempter.cc:310] Error event sent.
Sep 13 00:37:22.626412 update_engine[1570]: I20250913 00:37:22.626355 1570 update_check_scheduler.cc:74] Next update check in 46m6s
Sep 13 00:37:22.626748 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 13 00:37:27.257518 systemd[1]: Started sshd@331-78.46.184.112:22-147.75.109.163:49996.service - OpenSSH per-connection server daemon (147.75.109.163:49996).
Sep 13 00:37:28.249699 sshd[9777]: Accepted publickey for core from 147.75.109.163 port 49996 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:28.252109 sshd[9777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:28.260890 systemd-logind[1566]: New session 74 of user core.
Sep 13 00:37:28.270960 systemd[1]: Started session-74.scope - Session 74 of User core.
Sep 13 00:37:29.019124 sshd[9777]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:29.024358 systemd[1]: sshd@331-78.46.184.112:22-147.75.109.163:49996.service: Deactivated successfully.
Sep 13 00:37:29.031242 systemd[1]: session-74.scope: Deactivated successfully.
Sep 13 00:37:29.031521 systemd-logind[1566]: Session 74 logged out. Waiting for processes to exit.
Sep 13 00:37:29.033841 systemd-logind[1566]: Removed session 74.
Sep 13 00:37:34.184904 systemd[1]: Started sshd@332-78.46.184.112:22-147.75.109.163:59138.service - OpenSSH per-connection server daemon (147.75.109.163:59138).
Sep 13 00:37:35.167720 sshd[9799]: Accepted publickey for core from 147.75.109.163 port 59138 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:35.170121 sshd[9799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:35.176244 systemd-logind[1566]: New session 75 of user core.
Sep 13 00:37:35.180318 systemd[1]: Started session-75.scope - Session 75 of User core.
Sep 13 00:37:35.971997 sshd[9799]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:35.978532 systemd[1]: sshd@332-78.46.184.112:22-147.75.109.163:59138.service: Deactivated successfully.
Sep 13 00:37:35.986852 systemd[1]: session-75.scope: Deactivated successfully.
Sep 13 00:37:35.988616 systemd-logind[1566]: Session 75 logged out. Waiting for processes to exit.
Sep 13 00:37:35.991022 systemd-logind[1566]: Removed session 75.
Sep 13 00:37:41.144648 systemd[1]: Started sshd@333-78.46.184.112:22-147.75.109.163:45702.service - OpenSSH per-connection server daemon (147.75.109.163:45702).
Sep 13 00:37:42.123121 sshd[9813]: Accepted publickey for core from 147.75.109.163 port 45702 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:42.125699 sshd[9813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:42.133949 systemd-logind[1566]: New session 76 of user core.
Sep 13 00:37:42.141340 systemd[1]: Started session-76.scope - Session 76 of User core.
Sep 13 00:37:42.888978 sshd[9813]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:42.897957 systemd[1]: sshd@333-78.46.184.112:22-147.75.109.163:45702.service: Deactivated successfully.
Sep 13 00:37:42.902164 systemd[1]: session-76.scope: Deactivated successfully.
Sep 13 00:37:42.902651 systemd-logind[1566]: Session 76 logged out. Waiting for processes to exit.
Sep 13 00:37:42.905802 systemd-logind[1566]: Removed session 76.
Sep 13 00:37:48.062681 systemd[1]: Started sshd@334-78.46.184.112:22-147.75.109.163:45708.service - OpenSSH per-connection server daemon (147.75.109.163:45708).
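The update_engine records above trace one complete Omaha update check: the configured server is the literal string "disabled" (Flatcar's convention for switching updates off), so every fetch fails with "Could not resolve host: disabled", the fetcher retries, and the attempt ends with error code 37 (kActionCodeOmahaErrorInHTTPResponse) before the scheduler queues the next check. A minimal sketch of summarizing these cycles from an exported journal; the export step and file name are assumptions, not part of the log:

    #!/usr/bin/env python3
    # Hypothetical helper: summarize update_engine retry cycles from a journal
    # exported to a text file (e.g. via `journalctl > journal.txt`; assumed).
    import re
    import sys

    RETRY = re.compile(r"libcurl_http_fetcher\.cc:283\] No HTTP response, retry (\d+)")
    RESOLVE = re.compile(r"Could not resolve host: (\S+)")
    NEXT = re.compile(r"update_check_scheduler\.cc:74\] Next update check in (\S+)")

    def summarize(path):
        retries, hosts, next_checks = 0, set(), []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if RETRY.search(line):
                    retries += 1
                if m := RESOLVE.search(line):
                    hosts.add(m.group(1))
                if m := NEXT.search(line):
                    next_checks.append(m.group(1))
        print("retry records:", retries)
        print("unresolvable hosts:", sorted(hosts))  # expect ['disabled'] for this log
        print("next check intervals:", next_checks)

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "journal.txt")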
Sep 13 00:37:49.044818 sshd[9847]: Accepted publickey for core from 147.75.109.163 port 45708 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:49.047693 sshd[9847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:49.058933 systemd-logind[1566]: New session 77 of user core.
Sep 13 00:37:49.064820 systemd[1]: Started session-77.scope - Session 77 of User core.
Sep 13 00:37:49.833943 sshd[9847]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:49.839825 systemd[1]: sshd@334-78.46.184.112:22-147.75.109.163:45708.service: Deactivated successfully.
Sep 13 00:37:49.847751 systemd[1]: session-77.scope: Deactivated successfully.
Sep 13 00:37:49.849494 systemd-logind[1566]: Session 77 logged out. Waiting for processes to exit.
Sep 13 00:37:49.851012 systemd-logind[1566]: Removed session 77.
Sep 13 00:37:55.003930 systemd[1]: Started sshd@335-78.46.184.112:22-147.75.109.163:39960.service - OpenSSH per-connection server daemon (147.75.109.163:39960).
Sep 13 00:37:55.996959 sshd[9921]: Accepted publickey for core from 147.75.109.163 port 39960 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:37:55.998389 sshd[9921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:37:56.010585 systemd-logind[1566]: New session 78 of user core.
Sep 13 00:37:56.017946 systemd[1]: Started session-78.scope - Session 78 of User core.
Sep 13 00:37:56.805889 sshd[9921]: pam_unix(sshd:session): session closed for user core
Sep 13 00:37:56.813431 systemd-logind[1566]: Session 78 logged out. Waiting for processes to exit.
Sep 13 00:37:56.813983 systemd[1]: sshd@335-78.46.184.112:22-147.75.109.163:39960.service: Deactivated successfully.
Sep 13 00:37:56.824400 systemd[1]: session-78.scope: Deactivated successfully.
Sep 13 00:37:56.829346 systemd-logind[1566]: Removed session 78.
Sep 13 00:38:01.969975 systemd[1]: Started sshd@336-78.46.184.112:22-147.75.109.163:48468.service - OpenSSH per-connection server daemon (147.75.109.163:48468).
Sep 13 00:38:02.963347 sshd[9959]: Accepted publickey for core from 147.75.109.163 port 48468 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:02.966384 sshd[9959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:02.976239 systemd-logind[1566]: New session 79 of user core.
Sep 13 00:38:02.986066 systemd[1]: Started session-79.scope - Session 79 of User core.
Sep 13 00:38:03.756192 sshd[9959]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:03.763780 systemd[1]: sshd@336-78.46.184.112:22-147.75.109.163:48468.service: Deactivated successfully.
Sep 13 00:38:03.765745 systemd-logind[1566]: Session 79 logged out. Waiting for processes to exit.
Sep 13 00:38:03.773603 systemd[1]: session-79.scope: Deactivated successfully.
Sep 13 00:38:03.776710 systemd-logind[1566]: Removed session 79.
Sep 13 00:38:08.933351 systemd[1]: Started sshd@337-78.46.184.112:22-147.75.109.163:48476.service - OpenSSH per-connection server daemon (147.75.109.163:48476).
Sep 13 00:38:09.934294 sshd[9973]: Accepted publickey for core from 147.75.109.163 port 48476 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:09.938093 sshd[9973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:09.946741 systemd-logind[1566]: New session 80 of user core.
Sep 13 00:38:09.955423 systemd[1]: Started session-80.scope - Session 80 of User core.
Sep 13 00:38:10.716710 sshd[9973]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:10.728282 systemd[1]: sshd@337-78.46.184.112:22-147.75.109.163:48476.service: Deactivated successfully.
Sep 13 00:38:10.741553 systemd[1]: session-80.scope: Deactivated successfully.
Sep 13 00:38:10.746000 systemd-logind[1566]: Session 80 logged out. Waiting for processes to exit.
Sep 13 00:38:10.747280 systemd-logind[1566]: Removed session 80.
Sep 13 00:38:10.897955 systemd[1]: Started sshd@338-78.46.184.112:22-147.75.109.163:54168.service - OpenSSH per-connection server daemon (147.75.109.163:54168).
Sep 13 00:38:11.880790 sshd[9988]: Accepted publickey for core from 147.75.109.163 port 54168 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:11.884620 sshd[9988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:11.891408 systemd-logind[1566]: New session 81 of user core.
Sep 13 00:38:11.895864 systemd[1]: Started session-81.scope - Session 81 of User core.
Sep 13 00:38:12.883450 sshd[9988]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:12.892544 systemd[1]: sshd@338-78.46.184.112:22-147.75.109.163:54168.service: Deactivated successfully.
Sep 13 00:38:12.903344 systemd-logind[1566]: Session 81 logged out. Waiting for processes to exit.
Sep 13 00:38:12.904201 systemd[1]: session-81.scope: Deactivated successfully.
Sep 13 00:38:12.912943 systemd-logind[1566]: Removed session 81.
Sep 13 00:38:13.056900 systemd[1]: Started sshd@339-78.46.184.112:22-147.75.109.163:54172.service - OpenSSH per-connection server daemon (147.75.109.163:54172).
Sep 13 00:38:14.088409 sshd[10000]: Accepted publickey for core from 147.75.109.163 port 54172 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:14.094429 sshd[10000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:14.102532 systemd-logind[1566]: New session 82 of user core.
Sep 13 00:38:14.109269 systemd[1]: Started session-82.scope - Session 82 of User core.
Sep 13 00:38:16.984366 sshd[10000]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:16.995369 systemd-logind[1566]: Session 82 logged out. Waiting for processes to exit.
Sep 13 00:38:16.999085 systemd[1]: sshd@339-78.46.184.112:22-147.75.109.163:54172.service: Deactivated successfully.
Sep 13 00:38:17.007748 systemd[1]: session-82.scope: Deactivated successfully.
Sep 13 00:38:17.009218 systemd-logind[1566]: Removed session 82.
Sep 13 00:38:17.165814 systemd[1]: Started sshd@340-78.46.184.112:22-147.75.109.163:54180.service - OpenSSH per-connection server daemon (147.75.109.163:54180).
Sep 13 00:38:18.212034 sshd[10021]: Accepted publickey for core from 147.75.109.163 port 54180 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:18.215191 sshd[10021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:18.237731 systemd-logind[1566]: New session 83 of user core.
Sep 13 00:38:18.248970 systemd[1]: Started session-83.scope - Session 83 of User core.
Sep 13 00:38:19.166982 sshd[10021]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:19.176303 systemd-logind[1566]: Session 83 logged out. Waiting for processes to exit.
Sep 13 00:38:19.176993 systemd[1]: sshd@340-78.46.184.112:22-147.75.109.163:54180.service: Deactivated successfully.
Sep 13 00:38:19.181045 systemd[1]: session-83.scope: Deactivated successfully.
Sep 13 00:38:19.185315 systemd-logind[1566]: Removed session 83.
Sep 13 00:38:19.337088 systemd[1]: Started sshd@341-78.46.184.112:22-147.75.109.163:54194.service - OpenSSH per-connection server daemon (147.75.109.163:54194).
Sep 13 00:38:20.335889 sshd[10033]: Accepted publickey for core from 147.75.109.163 port 54194 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:20.338678 sshd[10033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:20.350100 systemd-logind[1566]: New session 84 of user core.
Sep 13 00:38:20.356184 systemd[1]: Started session-84.scope - Session 84 of User core.
Sep 13 00:38:21.117793 sshd[10033]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:21.123645 systemd[1]: sshd@341-78.46.184.112:22-147.75.109.163:54194.service: Deactivated successfully.
Sep 13 00:38:21.129098 systemd[1]: session-84.scope: Deactivated successfully.
Sep 13 00:38:21.129878 systemd-logind[1566]: Session 84 logged out. Waiting for processes to exit.
Sep 13 00:38:21.131974 systemd-logind[1566]: Removed session 84.
Sep 13 00:38:26.288002 systemd[1]: Started sshd@342-78.46.184.112:22-147.75.109.163:60876.service - OpenSSH per-connection server daemon (147.75.109.163:60876).
Sep 13 00:38:27.280753 sshd[10109]: Accepted publickey for core from 147.75.109.163 port 60876 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:27.286410 sshd[10109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:27.301253 systemd-logind[1566]: New session 85 of user core.
Sep 13 00:38:27.310789 systemd[1]: Started session-85.scope - Session 85 of User core.
Sep 13 00:38:28.059742 sshd[10109]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:28.068482 systemd[1]: sshd@342-78.46.184.112:22-147.75.109.163:60876.service: Deactivated successfully.
Sep 13 00:38:28.082124 systemd[1]: session-85.scope: Deactivated successfully.
Sep 13 00:38:28.084937 systemd-logind[1566]: Session 85 logged out. Waiting for processes to exit.
Sep 13 00:38:28.087167 systemd-logind[1566]: Removed session 85.
Sep 13 00:38:33.241614 systemd[1]: Started sshd@343-78.46.184.112:22-147.75.109.163:35928.service - OpenSSH per-connection server daemon (147.75.109.163:35928).
Sep 13 00:38:34.259447 sshd[10125]: Accepted publickey for core from 147.75.109.163 port 35928 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:34.262554 sshd[10125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:34.269211 systemd-logind[1566]: New session 86 of user core.
Sep 13 00:38:34.274872 systemd[1]: Started session-86.scope - Session 86 of User core.
Sep 13 00:38:35.032052 sshd[10125]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:35.037114 systemd[1]: sshd@343-78.46.184.112:22-147.75.109.163:35928.service: Deactivated successfully.
Sep 13 00:38:35.042254 systemd-logind[1566]: Session 86 logged out. Waiting for processes to exit.
Sep 13 00:38:35.042629 systemd[1]: session-86.scope: Deactivated successfully.
Sep 13 00:38:35.045185 systemd-logind[1566]: Removed session 86.
Sep 13 00:38:40.199009 systemd[1]: Started sshd@344-78.46.184.112:22-147.75.109.163:37662.service - OpenSSH per-connection server daemon (147.75.109.163:37662).
Sep 13 00:38:41.189084 sshd[10138]: Accepted publickey for core from 147.75.109.163 port 37662 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:41.192247 sshd[10138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:41.198585 systemd-logind[1566]: New session 87 of user core.
Sep 13 00:38:41.204911 systemd[1]: Started session-87.scope - Session 87 of User core.
Sep 13 00:38:41.991999 sshd[10138]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:41.997819 systemd[1]: sshd@344-78.46.184.112:22-147.75.109.163:37662.service: Deactivated successfully.
Sep 13 00:38:42.003299 systemd[1]: session-87.scope: Deactivated successfully.
Sep 13 00:38:42.006055 systemd-logind[1566]: Session 87 logged out. Waiting for processes to exit.
Sep 13 00:38:42.008006 systemd-logind[1566]: Removed session 87.
Sep 13 00:38:47.157004 systemd[1]: Started sshd@345-78.46.184.112:22-147.75.109.163:37674.service - OpenSSH per-connection server daemon (147.75.109.163:37674).
Sep 13 00:38:48.146522 sshd[10176]: Accepted publickey for core from 147.75.109.163 port 37674 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:48.148537 sshd[10176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:48.156093 systemd-logind[1566]: New session 88 of user core.
Sep 13 00:38:48.161139 systemd[1]: Started session-88.scope - Session 88 of User core.
Sep 13 00:38:48.758162 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.WvkxR0.mount: Deactivated successfully.
Sep 13 00:38:48.935177 sshd[10176]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:48.939994 systemd[1]: sshd@345-78.46.184.112:22-147.75.109.163:37674.service: Deactivated successfully.
Sep 13 00:38:48.948184 systemd[1]: session-88.scope: Deactivated successfully.
Sep 13 00:38:48.948245 systemd-logind[1566]: Session 88 logged out. Waiting for processes to exit.
Sep 13 00:38:48.951722 systemd-logind[1566]: Removed session 88.
Sep 13 00:38:54.114150 systemd[1]: Started sshd@346-78.46.184.112:22-147.75.109.163:33990.service - OpenSSH per-connection server daemon (147.75.109.163:33990).
Sep 13 00:38:55.100724 sshd[10250]: Accepted publickey for core from 147.75.109.163 port 33990 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:38:55.106877 sshd[10250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:38:55.116883 systemd-logind[1566]: New session 89 of user core.
Sep 13 00:38:55.129180 systemd[1]: Started session-89.scope - Session 89 of User core.
Sep 13 00:38:55.882858 sshd[10250]: pam_unix(sshd:session): session closed for user core
Sep 13 00:38:55.891632 systemd-logind[1566]: Session 89 logged out. Waiting for processes to exit.
Sep 13 00:38:55.892554 systemd[1]: sshd@346-78.46.184.112:22-147.75.109.163:33990.service: Deactivated successfully.
Sep 13 00:38:55.898170 systemd[1]: session-89.scope: Deactivated successfully.
Sep 13 00:38:55.900006 systemd-logind[1566]: Removed session 89.
Sep 13 00:39:01.059258 systemd[1]: Started sshd@347-78.46.184.112:22-147.75.109.163:37080.service - OpenSSH per-connection server daemon (147.75.109.163:37080).
Sep 13 00:39:02.082185 sshd[10302]: Accepted publickey for core from 147.75.109.163 port 37080 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:02.084993 sshd[10302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:02.091510 systemd-logind[1566]: New session 90 of user core.
Sep 13 00:39:02.095383 systemd[1]: Started session-90.scope - Session 90 of User core.
Sep 13 00:39:02.855820 sshd[10302]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:02.863414 systemd[1]: sshd@347-78.46.184.112:22-147.75.109.163:37080.service: Deactivated successfully.
Sep 13 00:39:02.864248 systemd-logind[1566]: Session 90 logged out. Waiting for processes to exit.
Sep 13 00:39:02.871893 systemd[1]: session-90.scope: Deactivated successfully.
Sep 13 00:39:02.874539 systemd-logind[1566]: Removed session 90.
Sep 13 00:39:08.026181 systemd[1]: Started sshd@348-78.46.184.112:22-147.75.109.163:37082.service - OpenSSH per-connection server daemon (147.75.109.163:37082).
Sep 13 00:39:09.045877 sshd[10323]: Accepted publickey for core from 147.75.109.163 port 37082 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:09.050827 sshd[10323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:09.060911 systemd-logind[1566]: New session 91 of user core.
Sep 13 00:39:09.071227 systemd[1]: Started session-91.scope - Session 91 of User core.
Sep 13 00:39:09.845193 sshd[10323]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:09.853884 systemd[1]: sshd@348-78.46.184.112:22-147.75.109.163:37082.service: Deactivated successfully.
Sep 13 00:39:09.857754 systemd-logind[1566]: Session 91 logged out. Waiting for processes to exit.
Sep 13 00:39:09.861355 systemd[1]: session-91.scope: Deactivated successfully.
Sep 13 00:39:09.867792 systemd-logind[1566]: Removed session 91.
Sep 13 00:39:15.013027 systemd[1]: Started sshd@349-78.46.184.112:22-147.75.109.163:43690.service - OpenSSH per-connection server daemon (147.75.109.163:43690).
Sep 13 00:39:16.008526 sshd[10337]: Accepted publickey for core from 147.75.109.163 port 43690 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:16.011829 sshd[10337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:16.020633 systemd-logind[1566]: New session 92 of user core.
Sep 13 00:39:16.027618 systemd[1]: Started session-92.scope - Session 92 of User core.
Sep 13 00:39:16.804421 sshd[10337]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:16.811225 systemd-logind[1566]: Session 92 logged out. Waiting for processes to exit.
Sep 13 00:39:16.816096 systemd[1]: sshd@349-78.46.184.112:22-147.75.109.163:43690.service: Deactivated successfully.
Sep 13 00:39:16.824688 systemd[1]: session-92.scope: Deactivated successfully.
Sep 13 00:39:16.829340 systemd-logind[1566]: Removed session 92.
Sep 13 00:39:21.977129 systemd[1]: Started sshd@350-78.46.184.112:22-147.75.109.163:39730.service - OpenSSH per-connection server daemon (147.75.109.163:39730).
Sep 13 00:39:22.992175 sshd[10353]: Accepted publickey for core from 147.75.109.163 port 39730 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:22.995626 sshd[10353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:23.005807 systemd-logind[1566]: New session 93 of user core.
Sep 13 00:39:23.015537 systemd[1]: Started session-93.scope - Session 93 of User core.
Sep 13 00:39:23.787820 sshd[10353]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:23.799196 systemd[1]: sshd@350-78.46.184.112:22-147.75.109.163:39730.service: Deactivated successfully.
Sep 13 00:39:23.806214 systemd[1]: session-93.scope: Deactivated successfully.
Sep 13 00:39:23.808250 systemd-logind[1566]: Session 93 logged out. Waiting for processes to exit.
Sep 13 00:39:23.813567 systemd-logind[1566]: Removed session 93.
Sep 13 00:39:28.967891 systemd[1]: Started sshd@351-78.46.184.112:22-147.75.109.163:39738.service - OpenSSH per-connection server daemon (147.75.109.163:39738).
Sep 13 00:39:30.016699 sshd[10428]: Accepted publickey for core from 147.75.109.163 port 39738 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:30.020439 sshd[10428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:30.036914 systemd-logind[1566]: New session 94 of user core.
Sep 13 00:39:30.043451 systemd[1]: Started session-94.scope - Session 94 of User core.
Sep 13 00:39:30.911816 sshd[10428]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:30.924639 systemd-logind[1566]: Session 94 logged out. Waiting for processes to exit.
Sep 13 00:39:30.926829 systemd[1]: sshd@351-78.46.184.112:22-147.75.109.163:39738.service: Deactivated successfully.
Sep 13 00:39:30.934248 systemd[1]: session-94.scope: Deactivated successfully.
Sep 13 00:39:30.940516 systemd-logind[1566]: Removed session 94.
Sep 13 00:39:36.087022 systemd[1]: Started sshd@352-78.46.184.112:22-147.75.109.163:42298.service - OpenSSH per-connection server daemon (147.75.109.163:42298).
Sep 13 00:39:37.072099 sshd[10443]: Accepted publickey for core from 147.75.109.163 port 42298 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:37.076925 sshd[10443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:37.085522 systemd-logind[1566]: New session 95 of user core.
Sep 13 00:39:37.090992 systemd[1]: Started session-95.scope - Session 95 of User core.
Sep 13 00:39:37.894355 sshd[10443]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:37.899440 systemd[1]: sshd@352-78.46.184.112:22-147.75.109.163:42298.service: Deactivated successfully.
Sep 13 00:39:37.900808 systemd-logind[1566]: Session 95 logged out. Waiting for processes to exit.
Sep 13 00:39:37.906414 systemd[1]: session-95.scope: Deactivated successfully.
Sep 13 00:39:37.909105 systemd-logind[1566]: Removed session 95.
Sep 13 00:39:43.061032 systemd[1]: Started sshd@353-78.46.184.112:22-147.75.109.163:37918.service - OpenSSH per-connection server daemon (147.75.109.163:37918).
Sep 13 00:39:44.058286 sshd[10457]: Accepted publickey for core from 147.75.109.163 port 37918 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:44.064063 sshd[10457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:44.072543 systemd-logind[1566]: New session 96 of user core.
Sep 13 00:39:44.078823 systemd[1]: Started session-96.scope - Session 96 of User core.
Sep 13 00:39:44.839904 sshd[10457]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:44.845910 systemd[1]: sshd@353-78.46.184.112:22-147.75.109.163:37918.service: Deactivated successfully.
Sep 13 00:39:44.851687 systemd[1]: session-96.scope: Deactivated successfully.
Sep 13 00:39:44.853079 systemd-logind[1566]: Session 96 logged out. Waiting for processes to exit.
Sep 13 00:39:44.855678 systemd-logind[1566]: Removed session 96.
Sep 13 00:39:50.007438 systemd[1]: Started sshd@354-78.46.184.112:22-147.75.109.163:37920.service - OpenSSH per-connection server daemon (147.75.109.163:37920).
Sep 13 00:39:50.987762 sshd[10510]: Accepted publickey for core from 147.75.109.163 port 37920 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:50.991823 sshd[10510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:51.004315 systemd-logind[1566]: New session 97 of user core.
Sep 13 00:39:51.010169 systemd[1]: Started session-97.scope - Session 97 of User core.
Sep 13 00:39:51.756767 sshd[10510]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:51.764660 systemd[1]: sshd@354-78.46.184.112:22-147.75.109.163:37920.service: Deactivated successfully.
Sep 13 00:39:51.770130 systemd[1]: session-97.scope: Deactivated successfully.
Sep 13 00:39:51.770219 systemd-logind[1566]: Session 97 logged out. Waiting for processes to exit.
Sep 13 00:39:51.773220 systemd-logind[1566]: Removed session 97.
Sep 13 00:39:56.930608 systemd[1]: Started sshd@355-78.46.184.112:22-147.75.109.163:49032.service - OpenSSH per-connection server daemon (147.75.109.163:49032).
Sep 13 00:39:57.909487 sshd[10587]: Accepted publickey for core from 147.75.109.163 port 49032 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:39:57.917121 sshd[10587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:39:57.931441 systemd-logind[1566]: New session 98 of user core.
Sep 13 00:39:57.939031 systemd[1]: Started session-98.scope - Session 98 of User core.
Sep 13 00:39:58.700057 sshd[10587]: pam_unix(sshd:session): session closed for user core
Sep 13 00:39:58.705902 systemd[1]: sshd@355-78.46.184.112:22-147.75.109.163:49032.service: Deactivated successfully.
Sep 13 00:39:58.712892 systemd[1]: session-98.scope: Deactivated successfully.
Sep 13 00:39:58.714106 systemd-logind[1566]: Session 98 logged out. Waiting for processes to exit.
Sep 13 00:39:58.715250 systemd-logind[1566]: Removed session 98.
Sep 13 00:40:03.875215 systemd[1]: Started sshd@356-78.46.184.112:22-147.75.109.163:37676.service - OpenSSH per-connection server daemon (147.75.109.163:37676).
Sep 13 00:40:04.893143 sshd[10603]: Accepted publickey for core from 147.75.109.163 port 37676 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:04.893929 sshd[10603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:04.900982 systemd-logind[1566]: New session 99 of user core.
Sep 13 00:40:04.907006 systemd[1]: Started session-99.scope - Session 99 of User core.
Sep 13 00:40:05.693204 sshd[10603]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:05.706817 systemd[1]: sshd@356-78.46.184.112:22-147.75.109.163:37676.service: Deactivated successfully.
Sep 13 00:40:05.713441 systemd[1]: session-99.scope: Deactivated successfully.
Sep 13 00:40:05.716138 systemd-logind[1566]: Session 99 logged out. Waiting for processes to exit.
Sep 13 00:40:05.718085 systemd-logind[1566]: Removed session 99.
Sep 13 00:40:10.869091 systemd[1]: Started sshd@357-78.46.184.112:22-147.75.109.163:42740.service - OpenSSH per-connection server daemon (147.75.109.163:42740).
Sep 13 00:40:11.916290 sshd[10617]: Accepted publickey for core from 147.75.109.163 port 42740 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:11.919512 sshd[10617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:11.931653 systemd-logind[1566]: New session 100 of user core.
Sep 13 00:40:11.939964 systemd[1]: Started session-100.scope - Session 100 of User core.
Sep 13 00:40:12.755921 sshd[10617]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:12.763081 systemd-logind[1566]: Session 100 logged out. Waiting for processes to exit.
Sep 13 00:40:12.765402 systemd[1]: sshd@357-78.46.184.112:22-147.75.109.163:42740.service: Deactivated successfully.
Sep 13 00:40:12.773537 systemd[1]: session-100.scope: Deactivated successfully.
Sep 13 00:40:12.778554 systemd-logind[1566]: Removed session 100.
Sep 13 00:40:16.247929 systemd[1]: Started sshd@358-78.46.184.112:22-141.98.11.68:61438.service - OpenSSH per-connection server daemon (141.98.11.68:61438).
Sep 13 00:40:16.563103 sshd[10631]: Received disconnect from 141.98.11.68 port 61438:11: [preauth]
Sep 13 00:40:16.563103 sshd[10631]: Disconnected from 141.98.11.68 port 61438 [preauth]
Sep 13 00:40:16.565716 systemd[1]: sshd@358-78.46.184.112:22-141.98.11.68:61438.service: Deactivated successfully.
Sep 13 00:40:17.927052 systemd[1]: Started sshd@359-78.46.184.112:22-147.75.109.163:42754.service - OpenSSH per-connection server daemon (147.75.109.163:42754).
Sep 13 00:40:18.929223 sshd[10636]: Accepted publickey for core from 147.75.109.163 port 42754 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:18.933354 sshd[10636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:18.945189 systemd-logind[1566]: New session 101 of user core.
Sep 13 00:40:18.952048 systemd[1]: Started session-101.scope - Session 101 of User core.
Sep 13 00:40:19.716968 sshd[10636]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:19.727937 systemd-logind[1566]: Session 101 logged out. Waiting for processes to exit.
Sep 13 00:40:19.728929 systemd[1]: sshd@359-78.46.184.112:22-147.75.109.163:42754.service: Deactivated successfully.
Sep 13 00:40:19.735153 systemd[1]: session-101.scope: Deactivated successfully.
Sep 13 00:40:19.739962 systemd-logind[1566]: Removed session 101.
Sep 13 00:40:24.895119 systemd[1]: Started sshd@360-78.46.184.112:22-147.75.109.163:50144.service - OpenSSH per-connection server daemon (147.75.109.163:50144).
Sep 13 00:40:25.904144 sshd[10693]: Accepted publickey for core from 147.75.109.163 port 50144 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:25.907295 sshd[10693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:25.919222 systemd-logind[1566]: New session 102 of user core.
Sep 13 00:40:25.928694 systemd[1]: Started session-102.scope - Session 102 of User core.
Sep 13 00:40:26.686841 sshd[10693]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:26.692731 systemd[1]: sshd@360-78.46.184.112:22-147.75.109.163:50144.service: Deactivated successfully.
Sep 13 00:40:26.693177 systemd-logind[1566]: Session 102 logged out. Waiting for processes to exit.
Sep 13 00:40:26.702581 systemd[1]: session-102.scope: Deactivated successfully.
Sep 13 00:40:26.707680 systemd-logind[1566]: Removed session 102.
Sep 13 00:40:31.851933 systemd[1]: Started sshd@361-78.46.184.112:22-147.75.109.163:52434.service - OpenSSH per-connection server daemon (147.75.109.163:52434).
Sep 13 00:40:32.844512 sshd[10745]: Accepted publickey for core from 147.75.109.163 port 52434 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:32.852417 sshd[10745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:32.863840 systemd-logind[1566]: New session 103 of user core.
Sep 13 00:40:32.869915 systemd[1]: Started session-103.scope - Session 103 of User core.
Sep 13 00:40:33.622353 sshd[10745]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:33.632948 systemd-logind[1566]: Session 103 logged out. Waiting for processes to exit.
Sep 13 00:40:33.634567 systemd[1]: sshd@361-78.46.184.112:22-147.75.109.163:52434.service: Deactivated successfully.
Sep 13 00:40:33.643882 systemd[1]: session-103.scope: Deactivated successfully.
Sep 13 00:40:33.646128 systemd-logind[1566]: Removed session 103.
Sep 13 00:40:38.803250 systemd[1]: Started sshd@362-78.46.184.112:22-147.75.109.163:52438.service - OpenSSH per-connection server daemon (147.75.109.163:52438).
Sep 13 00:40:39.854829 sshd[10766]: Accepted publickey for core from 147.75.109.163 port 52438 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:39.859663 sshd[10766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:39.871003 systemd-logind[1566]: New session 104 of user core.
Sep 13 00:40:39.875937 systemd[1]: Started session-104.scope - Session 104 of User core.
Sep 13 00:40:40.680609 sshd[10766]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:40.685684 systemd[1]: sshd@362-78.46.184.112:22-147.75.109.163:52438.service: Deactivated successfully.
Sep 13 00:40:40.694778 systemd[1]: session-104.scope: Deactivated successfully.
Sep 13 00:40:40.696595 systemd-logind[1566]: Session 104 logged out. Waiting for processes to exit.
Sep 13 00:40:40.698550 systemd-logind[1566]: Removed session 104.
Sep 13 00:40:45.849824 systemd[1]: Started sshd@363-78.46.184.112:22-147.75.109.163:59552.service - OpenSSH per-connection server daemon (147.75.109.163:59552).
Sep 13 00:40:46.837519 sshd[10800]: Accepted publickey for core from 147.75.109.163 port 59552 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:46.839714 sshd[10800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:46.844869 systemd-logind[1566]: New session 105 of user core.
Sep 13 00:40:46.851988 systemd[1]: Started session-105.scope - Session 105 of User core.
Sep 13 00:40:47.597923 sshd[10800]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:47.605556 systemd[1]: sshd@363-78.46.184.112:22-147.75.109.163:59552.service: Deactivated successfully.
Sep 13 00:40:47.611448 systemd[1]: session-105.scope: Deactivated successfully.
Sep 13 00:40:47.612837 systemd-logind[1566]: Session 105 logged out. Waiting for processes to exit.
Sep 13 00:40:47.614247 systemd-logind[1566]: Removed session 105.
Sep 13 00:40:48.758109 systemd[1]: run-containerd-runc-k8s.io-7d860d2bb4e69d2c796c17ea1ee5d6f84d9c13b727135a63f63e126feea2ddb7-runc.ICtDa9.mount: Deactivated successfully.
Sep 13 00:40:52.764831 systemd[1]: Started sshd@364-78.46.184.112:22-147.75.109.163:38150.service - OpenSSH per-connection server daemon (147.75.109.163:38150).
Sep 13 00:40:53.754139 sshd[10833]: Accepted publickey for core from 147.75.109.163 port 38150 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:40:53.756769 sshd[10833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:40:53.762771 systemd-logind[1566]: New session 106 of user core.
Sep 13 00:40:53.766951 systemd[1]: Started session-106.scope - Session 106 of User core.
Sep 13 00:40:54.528111 sshd[10833]: pam_unix(sshd:session): session closed for user core
Sep 13 00:40:54.534452 systemd[1]: sshd@364-78.46.184.112:22-147.75.109.163:38150.service: Deactivated successfully.
Sep 13 00:40:54.540662 systemd[1]: session-106.scope: Deactivated successfully.
Sep 13 00:40:54.542860 systemd-logind[1566]: Session 106 logged out. Waiting for processes to exit.
Sep 13 00:40:54.543962 systemd-logind[1566]: Removed session 106.
Sep 13 00:40:55.791104 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Sep 13 00:40:55.844733 systemd-tmpfiles[10907]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:40:55.846037 systemd-tmpfiles[10907]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:40:55.849644 systemd-tmpfiles[10907]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:40:55.851066 systemd-tmpfiles[10907]: ACLs are not supported, ignoring.
Sep 13 00:40:55.851307 systemd-tmpfiles[10907]: ACLs are not supported, ignoring.
Sep 13 00:40:55.862544 systemd-tmpfiles[10907]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:40:55.862558 systemd-tmpfiles[10907]: Skipping /boot
Sep 13 00:40:55.876682 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Sep 13 00:40:55.880022 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Sep 13 00:40:59.703994 systemd[1]: Started sshd@365-78.46.184.112:22-147.75.109.163:38158.service - OpenSSH per-connection server daemon (147.75.109.163:38158).
Sep 13 00:41:00.688484 sshd[10919]: Accepted publickey for core from 147.75.109.163 port 38158 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:00.690132 sshd[10919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:00.700714 systemd-logind[1566]: New session 107 of user core.
Sep 13 00:41:00.707904 systemd[1]: Started session-107.scope - Session 107 of User core.
Sep 13 00:41:01.445839 sshd[10919]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:01.453449 systemd[1]: sshd@365-78.46.184.112:22-147.75.109.163:38158.service: Deactivated successfully.
Sep 13 00:41:01.466220 systemd[1]: session-107.scope: Deactivated successfully.
Sep 13 00:41:01.468418 systemd-logind[1566]: Session 107 logged out. Waiting for processes to exit.
Sep 13 00:41:01.470197 systemd-logind[1566]: Removed session 107.
Sep 13 00:41:03.197133 kernel: hrtimer: interrupt took 2243199 ns
Sep 13 00:41:06.615003 systemd[1]: Started sshd@366-78.46.184.112:22-147.75.109.163:59602.service - OpenSSH per-connection server daemon (147.75.109.163:59602).
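The systemd-tmpfiles records above report duplicate path declarations ("/root", "/var/log/journal", "/var/lib/systemd") across tmpfiles.d fragments; systemd-tmpfiles keeps the first line it parses for a path and ignores the rest, so these are warnings, not failures. A simplified sketch of spotting such duplicates; it deliberately ignores tmpfiles.d precedence between /etc, /run and /usr/lib and specifier expansion, so it is an approximation, not systemd's actual logic:

    #!/usr/bin/env python3
    # Hypothetical check: list tmpfiles.d paths declared more than once.
    from collections import defaultdict
    from pathlib import Path

    def duplicate_tmpfiles_paths(root="/usr/lib/tmpfiles.d"):
        seen = defaultdict(list)  # path -> [(fragment name, line number)]
        for frag in sorted(Path(root).glob("*.conf")):
            for lineno, raw in enumerate(frag.read_text().splitlines(), 1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue  # skip blanks and comments
                fields = line.split()
                if len(fields) >= 2:  # second field is the path
                    seen[fields[1]].append((frag.name, lineno))
        return {p: locs for p, locs in seen.items() if len(locs) > 1}

    if __name__ == "__main__":
        for path, locs in duplicate_tmpfiles_paths().items():
            print(path, locs)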
Sep 13 00:41:07.617023 sshd[10934]: Accepted publickey for core from 147.75.109.163 port 59602 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:07.621313 sshd[10934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:07.634313 systemd-logind[1566]: New session 108 of user core.
Sep 13 00:41:07.642240 systemd[1]: Started session-108.scope - Session 108 of User core.
Sep 13 00:41:08.402911 sshd[10934]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:08.415379 systemd[1]: sshd@366-78.46.184.112:22-147.75.109.163:59602.service: Deactivated successfully.
Sep 13 00:41:08.420263 systemd[1]: session-108.scope: Deactivated successfully.
Sep 13 00:41:08.422034 systemd-logind[1566]: Session 108 logged out. Waiting for processes to exit.
Sep 13 00:41:08.423877 systemd-logind[1566]: Removed session 108.
Sep 13 00:41:13.570900 systemd[1]: Started sshd@367-78.46.184.112:22-147.75.109.163:60170.service - OpenSSH per-connection server daemon (147.75.109.163:60170).
Sep 13 00:41:14.559635 sshd[10948]: Accepted publickey for core from 147.75.109.163 port 60170 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:14.562704 sshd[10948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:14.568201 systemd-logind[1566]: New session 109 of user core.
Sep 13 00:41:14.577918 systemd[1]: Started session-109.scope - Session 109 of User core.
Sep 13 00:41:15.333419 sshd[10948]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:15.339387 systemd-logind[1566]: Session 109 logged out. Waiting for processes to exit.
Sep 13 00:41:15.339967 systemd[1]: sshd@367-78.46.184.112:22-147.75.109.163:60170.service: Deactivated successfully.
Sep 13 00:41:15.345365 systemd[1]: session-109.scope: Deactivated successfully.
Sep 13 00:41:15.347277 systemd-logind[1566]: Removed session 109.
Sep 13 00:41:20.501905 systemd[1]: Started sshd@368-78.46.184.112:22-147.75.109.163:45288.service - OpenSSH per-connection server daemon (147.75.109.163:45288).
Sep 13 00:41:21.496607 sshd[10962]: Accepted publickey for core from 147.75.109.163 port 45288 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:21.499608 sshd[10962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:21.506541 systemd-logind[1566]: New session 110 of user core.
Sep 13 00:41:21.510926 systemd[1]: Started session-110.scope - Session 110 of User core.
Sep 13 00:41:22.266328 sshd[10962]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:22.271267 systemd[1]: sshd@368-78.46.184.112:22-147.75.109.163:45288.service: Deactivated successfully.
Sep 13 00:41:22.273554 systemd-logind[1566]: Session 110 logged out. Waiting for processes to exit.
Sep 13 00:41:22.279741 systemd[1]: session-110.scope: Deactivated successfully.
Sep 13 00:41:22.281679 systemd-logind[1566]: Removed session 110.
Sep 13 00:41:27.432824 systemd[1]: Started sshd@369-78.46.184.112:22-147.75.109.163:45304.service - OpenSSH per-connection server daemon (147.75.109.163:45304).
Sep 13 00:41:28.427175 sshd[11038]: Accepted publickey for core from 147.75.109.163 port 45304 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:28.429832 sshd[11038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:28.436252 systemd-logind[1566]: New session 111 of user core.
Sep 13 00:41:28.439902 systemd[1]: Started session-111.scope - Session 111 of User core.
Sep 13 00:41:29.200701 sshd[11038]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:29.205658 systemd[1]: sshd@369-78.46.184.112:22-147.75.109.163:45304.service: Deactivated successfully.
Sep 13 00:41:29.215655 systemd[1]: session-111.scope: Deactivated successfully.
Sep 13 00:41:29.222495 systemd-logind[1566]: Session 111 logged out. Waiting for processes to exit.
Sep 13 00:41:29.228363 systemd-logind[1566]: Removed session 111.
Sep 13 00:41:34.368856 systemd[1]: Started sshd@370-78.46.184.112:22-147.75.109.163:58308.service - OpenSSH per-connection server daemon (147.75.109.163:58308).
Sep 13 00:41:35.362414 sshd[11053]: Accepted publickey for core from 147.75.109.163 port 58308 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:35.365393 sshd[11053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:35.373730 systemd-logind[1566]: New session 112 of user core.
Sep 13 00:41:35.378919 systemd[1]: Started session-112.scope - Session 112 of User core.
Sep 13 00:41:36.119926 sshd[11053]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:36.125241 systemd[1]: sshd@370-78.46.184.112:22-147.75.109.163:58308.service: Deactivated successfully.
Sep 13 00:41:36.131271 systemd[1]: session-112.scope: Deactivated successfully.
Sep 13 00:41:36.133777 systemd-logind[1566]: Session 112 logged out. Waiting for processes to exit.
Sep 13 00:41:36.136160 systemd-logind[1566]: Removed session 112.
Sep 13 00:41:41.293043 systemd[1]: Started sshd@371-78.46.184.112:22-147.75.109.163:45934.service - OpenSSH per-connection server daemon (147.75.109.163:45934).
Sep 13 00:41:42.292877 sshd[11067]: Accepted publickey for core from 147.75.109.163 port 45934 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:42.295756 sshd[11067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:42.304543 systemd-logind[1566]: New session 113 of user core.
Sep 13 00:41:42.310125 systemd[1]: Started session-113.scope - Session 113 of User core.
Sep 13 00:41:43.094929 sshd[11067]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:43.100377 systemd[1]: sshd@371-78.46.184.112:22-147.75.109.163:45934.service: Deactivated successfully.
Sep 13 00:41:43.106224 systemd-logind[1566]: Session 113 logged out. Waiting for processes to exit.
Sep 13 00:41:43.107326 systemd[1]: session-113.scope: Deactivated successfully.
Sep 13 00:41:43.110443 systemd-logind[1566]: Removed session 113.
Sep 13 00:41:48.268841 systemd[1]: Started sshd@372-78.46.184.112:22-147.75.109.163:45946.service - OpenSSH per-connection server daemon (147.75.109.163:45946).
Sep 13 00:41:49.323377 sshd[11107]: Accepted publickey for core from 147.75.109.163 port 45946 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:49.326166 sshd[11107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:49.335046 systemd-logind[1566]: New session 114 of user core.
Sep 13 00:41:49.343344 systemd[1]: Started session-114.scope - Session 114 of User core.
Sep 13 00:41:50.156211 sshd[11107]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:50.165173 systemd[1]: sshd@372-78.46.184.112:22-147.75.109.163:45946.service: Deactivated successfully.
Sep 13 00:41:50.174306 systemd[1]: session-114.scope: Deactivated successfully.
Sep 13 00:41:50.176339 systemd-logind[1566]: Session 114 logged out. Waiting for processes to exit.
Sep 13 00:41:50.179072 systemd-logind[1566]: Removed session 114.
Sep 13 00:41:55.330105 systemd[1]: Started sshd@373-78.46.184.112:22-147.75.109.163:44776.service - OpenSSH per-connection server daemon (147.75.109.163:44776).
Sep 13 00:41:55.786881 systemd[1]: run-containerd-runc-k8s.io-b004ea11e98ab831035ac43b8396cb21b60d80873e367e2892dbe910902cb68f-runc.1dftd0.mount: Deactivated successfully.
Sep 13 00:41:56.311093 sshd[11179]: Accepted publickey for core from 147.75.109.163 port 44776 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:41:56.313525 sshd[11179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:41:56.322420 systemd-logind[1566]: New session 115 of user core.
Sep 13 00:41:56.327064 systemd[1]: Started session-115.scope - Session 115 of User core.
Sep 13 00:41:57.096584 sshd[11179]: pam_unix(sshd:session): session closed for user core
Sep 13 00:41:57.102784 systemd[1]: sshd@373-78.46.184.112:22-147.75.109.163:44776.service: Deactivated successfully.
Sep 13 00:41:57.111301 systemd[1]: session-115.scope: Deactivated successfully.
Sep 13 00:41:57.113267 systemd-logind[1566]: Session 115 logged out. Waiting for processes to exit.
Sep 13 00:41:57.115001 systemd-logind[1566]: Removed session 115.
Sep 13 00:42:02.277595 systemd[1]: Started sshd@374-78.46.184.112:22-147.75.109.163:59002.service - OpenSSH per-connection server daemon (147.75.109.163:59002).
Sep 13 00:42:03.265201 sshd[11231]: Accepted publickey for core from 147.75.109.163 port 59002 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:42:03.266040 sshd[11231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:42:03.275521 systemd-logind[1566]: New session 116 of user core.
Sep 13 00:42:03.281551 systemd[1]: Started session-116.scope - Session 116 of User core.
Sep 13 00:42:04.053788 sshd[11231]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:04.058082 systemd-logind[1566]: Session 116 logged out. Waiting for processes to exit.
Sep 13 00:42:04.062246 systemd[1]: sshd@374-78.46.184.112:22-147.75.109.163:59002.service: Deactivated successfully.
Sep 13 00:42:04.070891 systemd[1]: session-116.scope: Deactivated successfully.
Sep 13 00:42:04.078380 systemd-logind[1566]: Removed session 116.
Sep 13 00:42:09.229656 systemd[1]: Started sshd@375-78.46.184.112:22-147.75.109.163:59016.service - OpenSSH per-connection server daemon (147.75.109.163:59016).
Sep 13 00:42:10.229249 sshd[11244]: Accepted publickey for core from 147.75.109.163 port 59016 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:42:10.233982 sshd[11244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:42:10.251308 systemd-logind[1566]: New session 117 of user core.
Sep 13 00:42:10.256389 systemd[1]: Started session-117.scope - Session 117 of User core.
Sep 13 00:42:11.031754 sshd[11244]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:11.039809 systemd[1]: sshd@375-78.46.184.112:22-147.75.109.163:59016.service: Deactivated successfully.
Sep 13 00:42:11.050157 systemd[1]: session-117.scope: Deactivated successfully.
Sep 13 00:42:11.053899 systemd-logind[1566]: Session 117 logged out. Waiting for processes to exit.
Sep 13 00:42:11.058282 systemd-logind[1566]: Removed session 117.
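Sessions 69 through 117 above all follow the same systemd-logind lifecycle: "New session N" on login, "Removed session N" after logout, roughly every six seconds from the same client. A sketch, under stated assumptions, that pairs those records to measure session lifetimes; syslog-style timestamps carry no year, so one is assumed when parsing, and the input file name is hypothetical:

    #!/usr/bin/env python3
    # Hypothetical helper: pair "New session"/"Removed session" records and
    # report how long each SSH session lasted.
    import re
    import sys
    from datetime import datetime

    TS = r"(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
    NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user")
    GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")

    def parse_ts(ts, year=2025):  # the year is an assumption, not in the log
        return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

    def session_durations(text):
        opened = {m.group(2): parse_ts(m.group(1)) for m in NEW.finditer(text)}
        for m in GONE.finditer(text):
            start = opened.pop(m.group(2), None)
            if start is not None:
                yield m.group(2), (parse_ts(m.group(1)) - start).total_seconds()

    if __name__ == "__main__":
        with open(sys.argv[1], encoding="utf-8") as fh:
            for sid, secs in session_durations(fh.read()):
                print(f"session {sid}: {secs:.1f}s")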
Sep 13 00:42:16.196077 systemd[1]: Started sshd@376-78.46.184.112:22-147.75.109.163:47350.service - OpenSSH per-connection server daemon (147.75.109.163:47350).
Sep 13 00:42:17.197757 sshd[11265]: Accepted publickey for core from 147.75.109.163 port 47350 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:42:17.201349 sshd[11265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:42:17.210551 systemd-logind[1566]: New session 118 of user core.
Sep 13 00:42:17.216263 systemd[1]: Started session-118.scope - Session 118 of User core.
Sep 13 00:42:18.007890 sshd[11265]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:18.015098 systemd[1]: sshd@376-78.46.184.112:22-147.75.109.163:47350.service: Deactivated successfully.
Sep 13 00:42:18.024498 systemd[1]: session-118.scope: Deactivated successfully.
Sep 13 00:42:18.029012 systemd-logind[1566]: Session 118 logged out. Waiting for processes to exit.
Sep 13 00:42:18.032364 systemd-logind[1566]: Removed session 118.
Sep 13 00:42:23.180414 systemd[1]: Started sshd@377-78.46.184.112:22-147.75.109.163:47874.service - OpenSSH per-connection server daemon (147.75.109.163:47874).
Sep 13 00:42:24.177973 sshd[11281]: Accepted publickey for core from 147.75.109.163 port 47874 ssh2: RSA SHA256:NhQZ2u4pNFNnTOyypgKfC/J5wpTyoVGrXq5/wUa/0SM
Sep 13 00:42:24.181090 sshd[11281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:42:24.189413 systemd-logind[1566]: New session 119 of user core.
Sep 13 00:42:24.191929 systemd[1]: Started session-119.scope - Session 119 of User core.
Sep 13 00:42:24.964784 sshd[11281]: pam_unix(sshd:session): session closed for user core
Sep 13 00:42:24.974806 systemd-logind[1566]: Session 119 logged out. Waiting for processes to exit.
Sep 13 00:42:24.978294 systemd[1]: sshd@377-78.46.184.112:22-147.75.109.163:47874.service: Deactivated successfully.
Sep 13 00:42:24.987970 systemd[1]: session-119.scope: Deactivated successfully.
Sep 13 00:42:24.990422 systemd-logind[1566]: Removed session 119.
Sep 13 00:42:41.231541 kubelet[2780]: E0913 00:42:41.230944 2780 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44266->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-5-n-c2bbffc425.1864b0ce6da8855d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-5-n-c2bbffc425,UID:9f08c5c26d31ec12b07d429f7ad9ffd8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-5-n-c2bbffc425,},FirstTimestamp:2025-09-13 00:42:35.230848349 +0000 UTC m=+913.581593913,LastTimestamp:2025-09-13 00:42:35.230848349 +0000 UTC m=+913.581593913,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-5-n-c2bbffc425,}"
Sep 13 00:42:41.637167 containerd[1597]: time="2025-09-13T00:42:41.637086277Z" level=info msg="shim disconnected" id=17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de namespace=k8s.io
Sep 13 00:42:41.637167 containerd[1597]: time="2025-09-13T00:42:41.637157879Z" level=warning msg="cleaning up after shim disconnected" id=17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de namespace=k8s.io
Sep 13 00:42:41.637167 containerd[1597]: time="2025-09-13T00:42:41.637170519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:42:41.637707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de-rootfs.mount: Deactivated successfully.
Sep 13 00:42:41.874780 kubelet[2780]: E0913 00:42:41.874122 2780 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44466->10.0.0.2:2379: read: connection timed out"
Sep 13 00:42:41.923575 containerd[1597]: time="2025-09-13T00:42:41.922928081Z" level=info msg="shim disconnected" id=a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403 namespace=k8s.io
Sep 13 00:42:41.923575 containerd[1597]: time="2025-09-13T00:42:41.923072124Z" level=warning msg="cleaning up after shim disconnected" id=a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403 namespace=k8s.io
Sep 13 00:42:41.923575 containerd[1597]: time="2025-09-13T00:42:41.923089044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:42:41.925116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403-rootfs.mount: Deactivated successfully.
Sep 13 00:42:41.939967 containerd[1597]: time="2025-09-13T00:42:41.939890800Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:42:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:42:42.320240 containerd[1597]: time="2025-09-13T00:42:42.319373804Z" level=info msg="shim disconnected" id=99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9 namespace=k8s.io
Sep 13 00:42:42.320240 containerd[1597]: time="2025-09-13T00:42:42.319439845Z" level=warning msg="cleaning up after shim disconnected" id=99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9 namespace=k8s.io
Sep 13 00:42:42.320240 containerd[1597]: time="2025-09-13T00:42:42.319452045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:42:42.323555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9-rootfs.mount: Deactivated successfully.
Sep 13 00:42:42.376808 kubelet[2780]: I0913 00:42:42.375532 2780 scope.go:117] "RemoveContainer" containerID="99d9c3cb4c9721dd32a2b0cadd6deed91dfa286b1b5b315e1281af128e0226b9"
Sep 13 00:42:42.379039 kubelet[2780]: I0913 00:42:42.379006 2780 scope.go:117] "RemoveContainer" containerID="a7502de30146a52ac615acc1b2cc46dae34272494274ab6e360ad451aa3ba403"
Sep 13 00:42:42.385250 containerd[1597]: time="2025-09-13T00:42:42.384262582Z" level=info msg="CreateContainer within sandbox \"206454aaa01d89eaeaeefde5318fe9ed8815eebc24278c218b01e467fbb9ea28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 13 00:42:42.388124 kubelet[2780]: I0913 00:42:42.386008 2780 scope.go:117] "RemoveContainer" containerID="17169d3958ad163de24a8df5f880fbc24d89b96af658eaa6b78db8f5a45e42de"
Sep 13 00:42:42.390118 containerd[1597]: time="2025-09-13T00:42:42.389897728Z" level=info msg="CreateContainer within sandbox \"1b93c511ada9e7530422ef1587c4b4e5244ef866d86241349872a9d174381388\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 13 00:42:42.394286 containerd[1597]: time="2025-09-13T00:42:42.394045765Z" level=info msg="CreateContainer within sandbox \"877964b18c196be587c77b2006b9df9076348237012eb5419bef47d0b3cbfbfb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:42:42.412814 containerd[1597]: time="2025-09-13T00:42:42.412723956Z" level=info msg="CreateContainer within sandbox \"1b93c511ada9e7530422ef1587c4b4e5244ef866d86241349872a9d174381388\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"296d65a9471a8b42bacceb179026d7ae5904f3d8507f7c2f45de620b3e0ab303\""
Sep 13 00:42:42.415590 containerd[1597]: time="2025-09-13T00:42:42.415541169Z" level=info msg="StartContainer for \"296d65a9471a8b42bacceb179026d7ae5904f3d8507f7c2f45de620b3e0ab303\""
Sep 13 00:42:42.420030 containerd[1597]: time="2025-09-13T00:42:42.419981652Z" level=info msg="CreateContainer within sandbox \"206454aaa01d89eaeaeefde5318fe9ed8815eebc24278c218b01e467fbb9ea28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c8aedf44fd593928f6251813c227c5ae43811cbf7a2641b6ff66f359ce3bf7ed\""
Sep 13 00:42:42.421239 containerd[1597]: time="2025-09-13T00:42:42.421203515Z" level=info msg="StartContainer for \"c8aedf44fd593928f6251813c227c5ae43811cbf7a2641b6ff66f359ce3bf7ed\""
Sep 13 00:42:42.421932 containerd[1597]: time="2025-09-13T00:42:42.421610403Z" level=info msg="CreateContainer within sandbox \"877964b18c196be587c77b2006b9df9076348237012eb5419bef47d0b3cbfbfb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0968c6e5bb9b74c6fb1e3ef6e8c85107fadae52675e1c132d20b1fb6ff482cf8\""
Sep 13 00:42:42.422369 containerd[1597]: time="2025-09-13T00:42:42.422287936Z" level=info msg="StartContainer for \"0968c6e5bb9b74c6fb1e3ef6e8c85107fadae52675e1c132d20b1fb6ff482cf8\""
Sep 13 00:42:42.546094 containerd[1597]: time="2025-09-13T00:42:42.546041339Z" level=info msg="StartContainer for \"296d65a9471a8b42bacceb179026d7ae5904f3d8507f7c2f45de620b3e0ab303\" returns successfully"
Sep 13 00:42:42.566509 containerd[1597]: time="2025-09-13T00:42:42.565879671Z" level=info msg="StartContainer for \"c8aedf44fd593928f6251813c227c5ae43811cbf7a2641b6ff66f359ce3bf7ed\" returns successfully"
Sep 13 00:42:42.586619 containerd[1597]: time="2025-09-13T00:42:42.585984769Z" level=info msg="StartContainer for \"0968c6e5bb9b74c6fb1e3ef6e8c85107fadae52675e1c132d20b1fb6ff482cf8\" returns successfully"