May 10 00:18:55.876294 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 10 00:18:55.876320 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 10 00:18:55.876330 kernel: KASLR enabled
May 10 00:18:55.876336 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 10 00:18:55.876342 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 10 00:18:55.876348 kernel: random: crng init done
May 10 00:18:55.876355 kernel: ACPI: Early table checksum verification disabled
May 10 00:18:55.876361 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 10 00:18:55.876368 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 10 00:18:55.876375 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876382 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876388 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876394 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876400 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876408 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876416 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876423 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876429 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 00:18:55.876436 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 10 00:18:55.876442 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 10 00:18:55.876449 kernel: NUMA: Failed to initialise from firmware
May 10 00:18:55.876456 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:18:55.876462 kernel: NUMA: NODE_DATA [mem 0x139671800-0x139676fff]
May 10 00:18:55.876468 kernel: Zone ranges:
May 10 00:18:55.876475 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 10 00:18:55.876483 kernel: DMA32 empty
May 10 00:18:55.876490 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 10 00:18:55.876496 kernel: Movable zone start for each node
May 10 00:18:55.876502 kernel: Early memory node ranges
May 10 00:18:55.876509 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 10 00:18:55.876516 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 10 00:18:55.876522 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 10 00:18:55.876528 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 10 00:18:55.876535 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 10 00:18:55.876541 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 10 00:18:55.876547 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 10 00:18:55.876554 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 10 00:18:55.876562 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 10 00:18:55.876568 kernel: psci: probing for conduit method from ACPI.
May 10 00:18:55.876575 kernel: psci: PSCIv1.1 detected in firmware.
May 10 00:18:55.876585 kernel: psci: Using standard PSCI v0.2 function IDs
May 10 00:18:55.876592 kernel: psci: Trusted OS migration not required
May 10 00:18:55.876599 kernel: psci: SMC Calling Convention v1.1
May 10 00:18:55.876607 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 10 00:18:55.876614 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 10 00:18:55.876621 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 10 00:18:55.876628 kernel: pcpu-alloc: [0] 0 [0] 1
May 10 00:18:55.876635 kernel: Detected PIPT I-cache on CPU0
May 10 00:18:55.876642 kernel: CPU features: detected: GIC system register CPU interface
May 10 00:18:55.876649 kernel: CPU features: detected: Hardware dirty bit management
May 10 00:18:55.876656 kernel: CPU features: detected: Spectre-v4
May 10 00:18:55.876663 kernel: CPU features: detected: Spectre-BHB
May 10 00:18:55.876670 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 10 00:18:55.876678 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 10 00:18:55.876685 kernel: CPU features: detected: ARM erratum 1418040
May 10 00:18:55.876692 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 10 00:18:55.876699 kernel: alternatives: applying boot alternatives
May 10 00:18:55.876707 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:18:55.876714 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 00:18:55.876721 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 10 00:18:55.876728 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 00:18:55.876735 kernel: Fallback order for Node 0: 0
May 10 00:18:55.876742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 10 00:18:55.876749 kernel: Policy zone: Normal
May 10 00:18:55.876757 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 00:18:55.876765 kernel: software IO TLB: area num 2.
May 10 00:18:55.876772 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 10 00:18:55.876779 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
May 10 00:18:55.876786 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 10 00:18:55.876793 kernel: rcu: Preemptible hierarchical RCU implementation.
May 10 00:18:55.876801 kernel: rcu: RCU event tracing is enabled.
May 10 00:18:55.876808 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 10 00:18:55.876815 kernel: Trampoline variant of Tasks RCU enabled.
May 10 00:18:55.876822 kernel: Tracing variant of Tasks RCU enabled.
May 10 00:18:55.876829 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 00:18:55.876838 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 10 00:18:55.876845 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 10 00:18:55.876852 kernel: GICv3: 256 SPIs implemented
May 10 00:18:55.876859 kernel: GICv3: 0 Extended SPIs implemented
May 10 00:18:55.876866 kernel: Root IRQ handler: gic_handle_irq
May 10 00:18:55.876873 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 10 00:18:55.876880 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 10 00:18:55.876887 kernel: ITS [mem 0x08080000-0x0809ffff]
May 10 00:18:55.876894 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 10 00:18:55.876902 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 10 00:18:55.876908 kernel: GICv3: using LPI property table @0x00000001000e0000
May 10 00:18:55.876916 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 10 00:18:55.879343 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 10 00:18:55.879361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:18:55.879368 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 10 00:18:55.879375 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 10 00:18:55.879383 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 10 00:18:55.879390 kernel: Console: colour dummy device 80x25
May 10 00:18:55.879397 kernel: ACPI: Core revision 20230628
May 10 00:18:55.879405 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 10 00:18:55.879412 kernel: pid_max: default: 32768 minimum: 301
May 10 00:18:55.879420 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 10 00:18:55.879432 kernel: landlock: Up and running.
May 10 00:18:55.879439 kernel: SELinux: Initializing.
May 10 00:18:55.879446 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:18:55.879453 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 10 00:18:55.879460 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 10 00:18:55.879468 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:18:55.879475 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 10 00:18:55.879482 kernel: rcu: Hierarchical SRCU implementation.
May 10 00:18:55.879489 kernel: rcu: Max phase no-delay instances is 400.
May 10 00:18:55.879498 kernel: Platform MSI: ITS@0x8080000 domain created
May 10 00:18:55.879505 kernel: PCI/MSI: ITS@0x8080000 domain created
May 10 00:18:55.879512 kernel: Remapping and enabling EFI services.
May 10 00:18:55.879519 kernel: smp: Bringing up secondary CPUs ...
May 10 00:18:55.879526 kernel: Detected PIPT I-cache on CPU1
May 10 00:18:55.879533 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 10 00:18:55.879540 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 10 00:18:55.879547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 10 00:18:55.879554 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 10 00:18:55.879562 kernel: smp: Brought up 1 node, 2 CPUs
May 10 00:18:55.879569 kernel: SMP: Total of 2 processors activated.
May 10 00:18:55.879577 kernel: CPU features: detected: 32-bit EL0 Support
May 10 00:18:55.879589 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 10 00:18:55.879598 kernel: CPU features: detected: Common not Private translations
May 10 00:18:55.879606 kernel: CPU features: detected: CRC32 instructions
May 10 00:18:55.879613 kernel: CPU features: detected: Enhanced Virtualization Traps
May 10 00:18:55.879620 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 10 00:18:55.879627 kernel: CPU features: detected: LSE atomic instructions
May 10 00:18:55.879635 kernel: CPU features: detected: Privileged Access Never
May 10 00:18:55.879642 kernel: CPU features: detected: RAS Extension Support
May 10 00:18:55.879651 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 10 00:18:55.879658 kernel: CPU: All CPU(s) started at EL1
May 10 00:18:55.879665 kernel: alternatives: applying system-wide alternatives
May 10 00:18:55.879673 kernel: devtmpfs: initialized
May 10 00:18:55.879681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 00:18:55.879688 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 10 00:18:55.879696 kernel: pinctrl core: initialized pinctrl subsystem
May 10 00:18:55.879704 kernel: SMBIOS 3.0.0 present.
May 10 00:18:55.879711 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 10 00:18:55.879718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 00:18:55.879726 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 10 00:18:55.879733 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 10 00:18:55.879741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 10 00:18:55.879748 kernel: audit: initializing netlink subsys (disabled)
May 10 00:18:55.879755 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
May 10 00:18:55.879764 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 00:18:55.879771 kernel: cpuidle: using governor menu
May 10 00:18:55.879778 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 10 00:18:55.879786 kernel: ASID allocator initialised with 32768 entries
May 10 00:18:55.879793 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 00:18:55.879800 kernel: Serial: AMBA PL011 UART driver
May 10 00:18:55.879808 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 10 00:18:55.879815 kernel: Modules: 0 pages in range for non-PLT usage
May 10 00:18:55.879822 kernel: Modules: 509008 pages in range for PLT usage
May 10 00:18:55.879831 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 10 00:18:55.879839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 10 00:18:55.879846 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 10 00:18:55.879853 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 10 00:18:55.879860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 10 00:18:55.879868 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 10 00:18:55.879875 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 10 00:18:55.879882 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 10 00:18:55.879889 kernel: ACPI: Added _OSI(Module Device)
May 10 00:18:55.879898 kernel: ACPI: Added _OSI(Processor Device)
May 10 00:18:55.879905 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 00:18:55.879913 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 00:18:55.879920 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 00:18:55.879927 kernel: ACPI: Interpreter enabled
May 10 00:18:55.879947 kernel: ACPI: Using GIC for interrupt routing
May 10 00:18:55.879955 kernel: ACPI: MCFG table detected, 1 entries
May 10 00:18:55.879963 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 10 00:18:55.879970 kernel: printk: console [ttyAMA0] enabled
May 10 00:18:55.879980 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 00:18:55.880137 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 00:18:55.880210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 10 00:18:55.880275 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 10 00:18:55.880357 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 10 00:18:55.880423 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 10 00:18:55.880432 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 10 00:18:55.880443 kernel: PCI host bridge to bus 0000:00
May 10 00:18:55.880515 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 10 00:18:55.880575 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 10 00:18:55.880633 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 10 00:18:55.880690 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 00:18:55.880769 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 10 00:18:55.880851 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 10 00:18:55.880921 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 10 00:18:55.881006 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:18:55.881083 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.881150 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 10 00:18:55.881224 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.883372 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 10 00:18:55.883540 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.883617 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 10 00:18:55.883699 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.883768 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 10 00:18:55.883841 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.883907 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 10 00:18:55.884045 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.884118 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 10 00:18:55.884192 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.884258 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 10 00:18:55.885827 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.885917 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 10 00:18:55.886019 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 10 00:18:55.886089 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 10 00:18:55.886163 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 10 00:18:55.886229 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 10 00:18:55.886347 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:18:55.886424 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 10 00:18:55.886498 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:18:55.886567 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:18:55.886644 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 10 00:18:55.886711 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 10 00:18:55.886787 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 10 00:18:55.886855 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 10 00:18:55.886922 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 10 00:18:55.887056 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 10 00:18:55.887127 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 10 00:18:55.887203 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 10 00:18:55.887272 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 10 00:18:55.887400 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 10 00:18:55.887484 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 10 00:18:55.887552 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 10 00:18:55.887623 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:18:55.887697 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 10 00:18:55.887763 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 10 00:18:55.887829 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 10 00:18:55.887894 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 10 00:18:55.887978 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 10 00:18:55.888045 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 10 00:18:55.888110 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 10 00:18:55.888178 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 10 00:18:55.888243 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 10 00:18:55.888325 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 10 00:18:55.888397 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 10 00:18:55.888462 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 10 00:18:55.888531 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 10 00:18:55.888600 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 10 00:18:55.888665 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 10 00:18:55.888729 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 10 00:18:55.888799 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 10 00:18:55.888864 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 10 00:18:55.888929 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 10 00:18:55.889017 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 10 00:18:55.889084 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 10 00:18:55.889149 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 10 00:18:55.889217 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 10 00:18:55.889282 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 10 00:18:55.890154 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 10 00:18:55.890229 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 10 00:18:55.891117 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 10 00:18:55.891218 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 10 00:18:55.891303 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 10 00:18:55.891394 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 10 00:18:55.891462 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 10 00:18:55.891530 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 10 00:18:55.891594 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:18:55.891660 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 10 00:18:55.891728 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:18:55.891797 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 10 00:18:55.891861 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:18:55.891928 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 10 00:18:55.892013 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:18:55.892080 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 10 00:18:55.892145 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:18:55.892215 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 10 00:18:55.892280 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:18:55.892372 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 10 00:18:55.892437 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:18:55.892503 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 10 00:18:55.892568 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:18:55.892638 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 10 00:18:55.892702 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:18:55.892771 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 10 00:18:55.892836 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 10 00:18:55.892902 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 10 00:18:55.892980 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 10 00:18:55.893050 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 10 00:18:55.893114 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 10 00:18:55.893182 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 10 00:18:55.893249 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 10 00:18:55.893342 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 10 00:18:55.893412 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 10 00:18:55.893477 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 10 00:18:55.893542 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 10 00:18:55.893607 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 10 00:18:55.893671 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 10 00:18:55.893739 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 10 00:18:55.893804 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 10 00:18:55.893868 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 10 00:18:55.893962 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 10 00:18:55.894042 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 10 00:18:55.894109 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 10 00:18:55.894179 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 10 00:18:55.894251 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 10 00:18:55.896464 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 10 00:18:55.896550 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 10 00:18:55.896618 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 10 00:18:55.896683 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 10 00:18:55.896747 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 10 00:18:55.896810 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:18:55.896882 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 10 00:18:55.896973 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 10 00:18:55.897041 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 10 00:18:55.897107 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 10 00:18:55.897171 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:18:55.897243 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 10 00:18:55.898495 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 10 00:18:55.898580 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 10 00:18:55.898646 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 10 00:18:55.898710 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 10 00:18:55.898773 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:18:55.898846 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 10 00:18:55.898913 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 10 00:18:55.899005 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 10 00:18:55.899094 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 10 00:18:55.899170 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:18:55.899254 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 10 00:18:55.899334 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 10 00:18:55.899405 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 10 00:18:55.900473 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 10 00:18:55.900613 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 10 00:18:55.900700 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:18:55.900800 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 10 00:18:55.900897 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 10 00:18:55.901005 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 10 00:18:55.901092 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 10 00:18:55.901171 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 10 00:18:55.901249 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:18:55.901353 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 10 00:18:55.901438 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 10 00:18:55.901523 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 10 00:18:55.901602 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 10 00:18:55.901679 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 10 00:18:55.901755 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 10 00:18:55.901832 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:18:55.901915 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 10 00:18:55.902006 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 10 00:18:55.902086 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 10 00:18:55.902167 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:18:55.902247 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 10 00:18:55.904398 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 10 00:18:55.904485 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 10 00:18:55.904553 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:18:55.904621 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 10 00:18:55.904679 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 10 00:18:55.904736 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 10 00:18:55.904814 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 10 00:18:55.904875 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 10 00:18:55.904975 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 10 00:18:55.905055 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 10 00:18:55.905117 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 10 00:18:55.905176 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 10 00:18:55.905247 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 10 00:18:55.905333 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 10 00:18:55.905409 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 10 00:18:55.905478 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 10 00:18:55.905537 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 10 00:18:55.905596 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 10 00:18:55.905662 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 10 00:18:55.905725 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 10 00:18:55.905784 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 10 00:18:55.905851 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 10 00:18:55.905912 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 10 00:18:55.905987 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 10 00:18:55.906056 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 10 00:18:55.906115 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 10 00:18:55.906174 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 10 00:18:55.906241 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 10 00:18:55.907910 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 10 00:18:55.908041 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 10 00:18:55.908114 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 10 00:18:55.908175 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 10 00:18:55.908234 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 10 00:18:55.908244 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 10 00:18:55.908252 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 10 00:18:55.908260 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 10 00:18:55.908268 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 10 00:18:55.908280 kernel: iommu: Default domain type: Translated
May 10 00:18:55.908302 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 10 00:18:55.908310 kernel: efivars: Registered efivars operations
May 10 00:18:55.908318 kernel: vgaarb: loaded
May 10 00:18:55.908326 kernel: clocksource: Switched to clocksource arch_sys_counter
May 10 00:18:55.908334 kernel: VFS: Disk quotas dquot_6.6.0
May 10 00:18:55.908341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 00:18:55.908349 kernel: pnp: PnP ACPI init
May 10 00:18:55.908429 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 10 00:18:55.908443 kernel: pnp: PnP ACPI: found 1 devices
May 10 00:18:55.908451 kernel: NET: Registered PF_INET protocol family
May 10 00:18:55.908459 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 10 00:18:55.908467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 10 00:18:55.908475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 00:18:55.908482 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 00:18:55.908490 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 10 00:18:55.908498 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 10 00:18:55.908507 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:18:55.908515 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 10 00:18:55.908523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 00:18:55.908598 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 10 00:18:55.908610 kernel: PCI: CLS 0 bytes, default 64
May 10 00:18:55.908617 kernel: kvm [1]: HYP mode not available
May 10 00:18:55.908625 kernel: Initialise system trusted keyrings
May 10 00:18:55.908633 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 10 00:18:55.908640 kernel: Key type asymmetric registered
May 10 00:18:55.908650 kernel: Asymmetric key parser 'x509' registered
May 10 00:18:55.908657 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 10 00:18:55.908665 kernel: io scheduler mq-deadline registered
May 10 00:18:55.908673 kernel: io scheduler kyber registered
May 10 00:18:55.908681 kernel: io scheduler bfq registered
May 10 00:18:55.908690 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 10 00:18:55.908757 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 10 00:18:55.908824 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 10 00:18:55.908893 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.908979 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 10 00:18:55.909049 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 10 00:18:55.909115 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.909183 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
May 10 00:18:55.909248 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
May 10 00:18:55.909411 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.909486 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
May 10 00:18:55.909551 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
May 10 00:18:55.909614 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.909679 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
May 10 00:18:55.909743 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
May 10 00:18:55.909812 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.909878 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
May 10 00:18:55.909956 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
May 10 00:18:55.910025 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.910092 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
May 10 00:18:55.910156 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
May 10 00:18:55.910240 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.910316 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
May 10 00:18:55.910382 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
May 10 00:18:55.910448 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.910459 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
May 10 00:18:55.910524 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
May 10 00:18:55.910589 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
May 10 00:18:55.910657 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 10 00:18:55.910668 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 10 00:18:55.910676 kernel: ACPI: button: Power Button [PWRB]
May 10 00:18:55.910684 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 10 00:18:55.910754 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
May 10 00:18:55.910824 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
May 10 00:18:55.910836 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 10 00:18:55.910844 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 10 00:18:55.910914 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
May 10 00:18:55.910925 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
May 10 00:18:55.910942 kernel: thunder_xcv, ver 1.0
May 10 00:18:55.910952 kernel: thunder_bgx, ver 1.0
May 10 00:18:55.910960 kernel: nicpf, ver 1.0
May 10 00:18:55.910968 kernel: nicvf, ver 1.0
May 10 00:18:55.911055 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 10 00:18:55.911119 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-10T00:18:55 UTC (1746836335)
May 10 00:18:55.911133 kernel: hid: raw HID events driver (C) Jiri Kosina
May 10 00:18:55.911141 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 10 00:18:55.911149 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 10 00:18:55.911157 kernel: watchdog: Hard watchdog permanently disabled
May 10 00:18:55.911165 kernel: NET: Registered PF_INET6 protocol family
May 10 00:18:55.911173 kernel: Segment Routing with IPv6
May 10 00:18:55.911180 kernel: In-situ OAM (IOAM) with IPv6
May 10 00:18:55.911188 kernel: NET: Registered PF_PACKET protocol family
May 10 00:18:55.911196 kernel: Key type dns_resolver registered
May 10 00:18:55.911206 kernel: registered taskstats version 1
May 10 00:18:55.911213 kernel: Loading compiled-in X.509 certificates
May 10 00:18:55.911221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 10 00:18:55.911229 kernel: Key type .fscrypt registered
May 10 00:18:55.911236 kernel: Key type fscrypt-provisioning registered
May 10 00:18:55.911244 kernel: ima: No TPM chip found, activating TPM-bypass!
May 10 00:18:55.911252 kernel: ima: Allocated hash algorithm: sha1
May 10 00:18:55.911260 kernel: ima: No architecture policies found
May 10 00:18:55.911269 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 10 00:18:55.911277 kernel: clk: Disabling unused clocks
May 10 00:18:55.911325 kernel: Freeing unused kernel memory: 39424K
May 10 00:18:55.911335 kernel: Run /init as init process
May 10 00:18:55.911342 kernel: with arguments:
May 10 00:18:55.911350 kernel: /init
May 10 00:18:55.911358 kernel: with environment:
May 10 00:18:55.911365 kernel: HOME=/
May 10 00:18:55.911373 kernel: TERM=linux
May 10 00:18:55.911380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 10 00:18:55.911394 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 10 00:18:55.911404 systemd[1]: Detected virtualization kvm.
May 10 00:18:55.911413 systemd[1]: Detected architecture arm64.
May 10 00:18:55.911421 systemd[1]: Running in initrd.
May 10 00:18:55.911429 systemd[1]: No hostname configured, using default hostname.
May 10 00:18:55.911436 systemd[1]: Hostname set to .
May 10 00:18:55.911445 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:18:55.911455 systemd[1]: Queued start job for default target initrd.target.
May 10 00:18:55.911463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 10 00:18:55.911472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 10 00:18:55.911480 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 10 00:18:55.911489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 10 00:18:55.911497 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 10 00:18:55.911506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 10 00:18:55.911517 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 10 00:18:55.911526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 10 00:18:55.911534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 10 00:18:55.911542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 10 00:18:55.911551 systemd[1]: Reached target paths.target - Path Units.
May 10 00:18:55.911559 systemd[1]: Reached target slices.target - Slice Units.
May 10 00:18:55.911567 systemd[1]: Reached target swap.target - Swaps.
May 10 00:18:55.911575 systemd[1]: Reached target timers.target - Timer Units.
May 10 00:18:55.911585 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 10 00:18:55.911593 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 10 00:18:55.911602 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 10 00:18:55.911610 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 10 00:18:55.911619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 10 00:18:55.911627 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 10 00:18:55.911636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 10 00:18:55.911644 systemd[1]: Reached target sockets.target - Socket Units.
May 10 00:18:55.911652 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 10 00:18:55.911662 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 10 00:18:55.911670 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 10 00:18:55.911678 systemd[1]: Starting systemd-fsck-usr.service...
May 10 00:18:55.911686 systemd[1]: Starting systemd-journald.service - Journal Service...
May 10 00:18:55.911695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 10 00:18:55.911703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:18:55.911712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 10 00:18:55.911720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 10 00:18:55.911751 systemd-journald[237]: Collecting audit messages is disabled.
May 10 00:18:55.911771 systemd[1]: Finished systemd-fsck-usr.service.
May 10 00:18:55.911782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 10 00:18:55.911790 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:18:55.911799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 10 00:18:55.911807 kernel: Bridge firewalling registered
May 10 00:18:55.911815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:18:55.911823 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 10 00:18:55.911834 systemd-journald[237]: Journal started
May 10 00:18:55.911853 systemd-journald[237]: Runtime Journal (/run/log/journal/62e4af68d84f449f9a50ff4a66e1ba00) is 8.0M, max 76.6M, 68.6M free.
May 10 00:18:55.883673 systemd-modules-load[238]: Inserted module 'overlay'
May 10 00:18:55.912977 systemd[1]: Started systemd-journald.service - Journal Service.
May 10 00:18:55.907281 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 10 00:18:55.913733 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 10 00:18:55.922527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 10 00:18:55.925275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 10 00:18:55.927474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 10 00:18:55.936893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 10 00:18:55.942670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:18:55.945150 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 10 00:18:55.948323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 10 00:18:55.956889 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 10 00:18:55.963342 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 10 00:18:55.968714 dracut-cmdline[270]: dracut-dracut-053
May 10 00:18:55.973924 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 10 00:18:55.984259 systemd-resolved[273]: Positive Trust Anchors:
May 10 00:18:55.984281 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:18:55.984844 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 10 00:18:55.990361 systemd-resolved[273]: Defaulting to hostname 'linux'.
May 10 00:18:55.992002 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 10 00:18:55.992688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 10 00:18:56.062333 kernel: SCSI subsystem initialized
May 10 00:18:56.067324 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:18:56.074329 kernel: iscsi: registered transport (tcp)
May 10 00:18:56.088320 kernel: iscsi: registered transport (qla4xxx)
May 10 00:18:56.088382 kernel: QLogic iSCSI HBA Driver
May 10 00:18:56.125254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 10 00:18:56.130483 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 10 00:18:56.157590 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:18:56.157655 kernel: device-mapper: uevent: version 1.0.3
May 10 00:18:56.158482 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 10 00:18:56.207366 kernel: raid6: neonx8 gen() 15688 MB/s
May 10 00:18:56.224341 kernel: raid6: neonx4 gen() 12596 MB/s
May 10 00:18:56.241325 kernel: raid6: neonx2 gen() 13135 MB/s
May 10 00:18:56.258366 kernel: raid6: neonx1 gen() 10437 MB/s
May 10 00:18:56.275335 kernel: raid6: int64x8 gen() 6889 MB/s
May 10 00:18:56.292349 kernel: raid6: int64x4 gen() 7309 MB/s
May 10 00:18:56.309333 kernel: raid6: int64x2 gen() 6096 MB/s
May 10 00:18:56.326337 kernel: raid6: int64x1 gen() 5002 MB/s
May 10 00:18:56.326387 kernel: raid6: using algorithm neonx8 gen() 15688 MB/s
May 10 00:18:56.343338 kernel: raid6: .... xor() 11854 MB/s, rmw enabled
May 10 00:18:56.343396 kernel: raid6: using neon recovery algorithm
May 10 00:18:56.348323 kernel: xor: measuring software checksum speed
May 10 00:18:56.348368 kernel: 8regs : 19807 MB/sec
May 10 00:18:56.348404 kernel: 32regs : 17999 MB/sec
May 10 00:18:56.349334 kernel: arm64_neon : 26936 MB/sec
May 10 00:18:56.349373 kernel: xor: using function: arm64_neon (26936 MB/sec)
May 10 00:18:56.401379 kernel: Btrfs loaded, zoned=no, fsverity=no
May 10 00:18:56.417383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 10 00:18:56.426493 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 10 00:18:56.442409 systemd-udevd[455]: Using default interface naming scheme 'v255'.
May 10 00:18:56.445772 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 10 00:18:56.454441 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 10 00:18:56.470529 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
May 10 00:18:56.508561 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 10 00:18:56.514555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 10 00:18:56.569876 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 10 00:18:56.579417 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 10 00:18:56.597167 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 10 00:18:56.599175 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 10 00:18:56.601198 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 10 00:18:56.602354 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 10 00:18:56.608542 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 10 00:18:56.628500 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 10 00:18:56.680312 kernel: scsi host0: Virtio SCSI HBA
May 10 00:18:56.688330 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 10 00:18:56.688412 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 10 00:18:56.688741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 00:18:56.688861 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:18:56.689870 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:18:56.692846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 00:18:56.693011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:18:56.693963 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:18:56.702312 kernel: ACPI: bus type USB registered
May 10 00:18:56.704322 kernel: usbcore: registered new interface driver usbfs
May 10 00:18:56.706443 kernel: usbcore: registered new interface driver hub
May 10 00:18:56.706483 kernel: usbcore: registered new device driver usb
May 10 00:18:56.706738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 10 00:18:56.722377 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 10 00:18:56.724338 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 10 00:18:56.724521 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 00:18:56.728855 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 10 00:18:56.727675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 10 00:18:56.739234 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:18:56.739424 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 10 00:18:56.739513 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 10 00:18:56.740034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 10 00:18:56.742640 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 10 00:18:56.742805 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 10 00:18:56.742889 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 10 00:18:56.744492 kernel: hub 1-0:1.0: USB hub found
May 10 00:18:56.747192 kernel: hub 1-0:1.0: 4 ports detected
May 10 00:18:56.747482 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 10 00:18:56.747600 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 10 00:18:56.747692 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 10 00:18:56.747773 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 10 00:18:56.747851 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 10 00:18:56.748627 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 10 00:18:56.751316 kernel: hub 2-0:1.0: USB hub found
May 10 00:18:56.751476 kernel: hub 2-0:1.0: 4 ports detected
May 10 00:18:56.753618 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:18:56.753656 kernel: GPT:17805311 != 80003071
May 10 00:18:56.753666 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:18:56.753675 kernel: GPT:17805311 != 80003071
May 10 00:18:56.753683 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:18:56.754319 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 10 00:18:56.755304 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 10 00:18:56.765173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 10 00:18:56.800318 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (515) May 10 00:18:56.802382 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (508) May 10 00:18:56.806512 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 10 00:18:56.813274 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 10 00:18:56.826836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:18:56.832627 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 10 00:18:56.835501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 10 00:18:56.840774 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 10 00:18:56.857596 disk-uuid[578]: Primary Header is updated. May 10 00:18:56.857596 disk-uuid[578]: Secondary Entries is updated. May 10 00:18:56.857596 disk-uuid[578]: Secondary Header is updated. May 10 00:18:56.866350 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:18:56.871320 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:18:56.877330 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:18:56.989525 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 00:18:57.125543 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 May 10 00:18:57.125632 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 10 00:18:57.125994 kernel: usbcore: registered new interface driver usbhid May 10 00:18:57.126025 kernel: usbhid: USB HID core driver May 10 00:18:57.231354 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd May 10 00:18:57.361322 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 May 10 00:18:57.416002 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 May 10 00:18:57.879324 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 10 00:18:57.880239 disk-uuid[579]: The operation has completed successfully. May 10 00:18:57.922796 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 00:18:57.923548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 10 00:18:57.937503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 10 00:18:57.944486 sh[596]: Success May 10 00:18:57.960320 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 10 00:18:58.013079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 10 00:18:58.016581 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 10 00:18:58.018319 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 10 00:18:58.048608 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354 May 10 00:18:58.048734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 10 00:18:58.049602 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 10 00:18:58.050474 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 10 00:18:58.051303 kernel: BTRFS info (device dm-0): using free space tree May 10 00:18:58.057305 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 10 00:18:58.059002 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 10 00:18:58.060780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 10 00:18:58.068549 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 10 00:18:58.073186 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 10 00:18:58.085650 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:18:58.085721 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:18:58.085746 kernel: BTRFS info (device sda6): using free space tree May 10 00:18:58.091352 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:18:58.091406 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:18:58.106372 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:18:58.106111 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 00:18:58.112436 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 10 00:18:58.119508 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 10 00:18:58.218134 ignition[681]: Ignition 2.19.0 May 10 00:18:58.218151 ignition[681]: Stage: fetch-offline May 10 00:18:58.219103 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:18:58.218196 ignition[681]: no configs at "/usr/lib/ignition/base.d" May 10 00:18:58.218205 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:58.221534 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:18:58.218405 ignition[681]: parsed url from cmdline: "" May 10 00:18:58.218408 ignition[681]: no config URL provided May 10 00:18:58.218413 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:18:58.218421 ignition[681]: no config at "/usr/lib/ignition/user.ign" May 10 00:18:58.218426 ignition[681]: failed to fetch config: resource requires networking May 10 00:18:58.218609 ignition[681]: Ignition finished successfully May 10 00:18:58.228565 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:18:58.252589 systemd-networkd[785]: lo: Link UP May 10 00:18:58.252599 systemd-networkd[785]: lo: Gained carrier May 10 00:18:58.254559 systemd-networkd[785]: Enumeration completed May 10 00:18:58.255412 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:18:58.256062 systemd[1]: Reached target network.target - Network. May 10 00:18:58.256853 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 10 00:18:58.256856 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:18:58.258086 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:18:58.258089 systemd-networkd[785]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:18:58.260465 systemd-networkd[785]: eth0: Link UP May 10 00:18:58.260469 systemd-networkd[785]: eth0: Gained carrier May 10 00:18:58.260478 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:18:58.269578 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 10 00:18:58.269729 systemd-networkd[785]: eth1: Link UP May 10 00:18:58.269733 systemd-networkd[785]: eth1: Gained carrier May 10 00:18:58.269743 systemd-networkd[785]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:18:58.283777 ignition[787]: Ignition 2.19.0 May 10 00:18:58.283789 ignition[787]: Stage: fetch May 10 00:18:58.283981 ignition[787]: no configs at "/usr/lib/ignition/base.d" May 10 00:18:58.283993 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:58.284088 ignition[787]: parsed url from cmdline: "" May 10 00:18:58.284092 ignition[787]: no config URL provided May 10 00:18:58.284096 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" May 10 00:18:58.284103 ignition[787]: no config at "/usr/lib/ignition/user.ign" May 10 00:18:58.284122 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 10 00:18:58.284785 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 10 00:18:58.302388 systemd-networkd[785]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:18:58.339381 systemd-networkd[785]: eth0: DHCPv4 address 91.107.204.139/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:18:58.485705 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 10 00:18:58.492470 ignition[787]: GET result: OK May 10 00:18:58.492620 ignition[787]: parsing config with SHA512: 0e1a03b48ae53adcdcbbbc18a2b03f7cb93db50e479b75af362dbbc8680cb8c2f6ea16487aabe3a0b12994ee951f0313bd4d1f3aa6590adeeab20aa985359218 May 10 00:18:58.498753 unknown[787]: fetched base config from "system" May 10 00:18:58.498762 unknown[787]: fetched base config from "system" May 10 00:18:58.499144 ignition[787]: fetch: fetch complete May 10 00:18:58.498767 unknown[787]: fetched user config from "hetzner" May 10 00:18:58.499148 ignition[787]: fetch: fetch passed May 10 00:18:58.501856 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 10 00:18:58.499191 ignition[787]: Ignition finished successfully May 10 00:18:58.508536 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 10 00:18:58.522229 ignition[795]: Ignition 2.19.0 May 10 00:18:58.522241 ignition[795]: Stage: kargs May 10 00:18:58.522445 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 10 00:18:58.522456 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:58.523450 ignition[795]: kargs: kargs passed May 10 00:18:58.525150 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 10 00:18:58.523508 ignition[795]: Ignition finished successfully May 10 00:18:58.531475 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 10 00:18:58.543917 ignition[801]: Ignition 2.19.0 May 10 00:18:58.543939 ignition[801]: Stage: disks May 10 00:18:58.544120 ignition[801]: no configs at "/usr/lib/ignition/base.d" May 10 00:18:58.544130 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:58.545065 ignition[801]: disks: disks passed May 10 00:18:58.547170 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 10 00:18:58.545118 ignition[801]: Ignition finished successfully May 10 00:18:58.548211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 10 00:18:58.548879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 10 00:18:58.549948 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:18:58.550803 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:18:58.551758 systemd[1]: Reached target basic.target - Basic System. May 10 00:18:58.557581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 10 00:18:58.574386 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 10 00:18:58.578452 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 10 00:18:58.583508 systemd[1]: Mounting sysroot.mount - /sysroot... May 10 00:18:58.635310 kernel: EXT4-fs (sda9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none. May 10 00:18:58.635632 systemd[1]: Mounted sysroot.mount - /sysroot. May 10 00:18:58.637340 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 10 00:18:58.643415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:18:58.647036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 10 00:18:58.650511 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 10 00:18:58.652550 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 10 00:18:58.652583 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:18:58.657607 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 10 00:18:58.662722 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (817) May 10 00:18:58.662772 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:18:58.662785 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:18:58.662796 kernel: BTRFS info (device sda6): using free space tree May 10 00:18:58.665059 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 10 00:18:58.670584 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:18:58.670628 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:18:58.676611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 00:18:58.727215 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory May 10 00:18:58.730378 coreos-metadata[819]: May 10 00:18:58.730 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 10 00:18:58.734143 coreos-metadata[819]: May 10 00:18:58.733 INFO Fetch successful May 10 00:18:58.735334 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory May 10 00:18:58.736140 coreos-metadata[819]: May 10 00:18:58.735 INFO wrote hostname ci-4081-3-3-n-2389c948d4 to /sysroot/etc/hostname May 10 00:18:58.741322 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:18:58.744777 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory May 10 00:18:58.750518 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory May 10 00:18:58.844656 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 10 00:18:58.852460 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 10 00:18:58.856470 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 10 00:18:58.863333 kernel: BTRFS info (device sda6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:18:58.886986 ignition[933]: INFO : Ignition 2.19.0 May 10 00:18:58.886986 ignition[933]: INFO : Stage: mount May 10 00:18:58.886986 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:18:58.886986 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:58.890451 ignition[933]: INFO : mount: mount passed May 10 00:18:58.890451 ignition[933]: INFO : Ignition finished successfully May 10 00:18:58.891972 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 10 00:18:58.900484 systemd[1]: Starting ignition-files.service - Ignition (files)... May 10 00:18:58.903853 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 10 00:18:59.049007 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 10 00:18:59.054614 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 10 00:18:59.064341 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (945) May 10 00:18:59.065059 kernel: BTRFS info (device sda6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 10 00:18:59.066303 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 10 00:18:59.066345 kernel: BTRFS info (device sda6): using free space tree May 10 00:18:59.069319 kernel: BTRFS info (device sda6): enabling ssd optimizations May 10 00:18:59.069364 kernel: BTRFS info (device sda6): auto enabling async discard May 10 00:18:59.073340 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 10 00:18:59.098726 ignition[962]: INFO : Ignition 2.19.0 May 10 00:18:59.098726 ignition[962]: INFO : Stage: files May 10 00:18:59.099725 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:18:59.099725 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:18:59.101308 ignition[962]: DEBUG : files: compiled without relabeling support, skipping May 10 00:18:59.101308 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 10 00:18:59.101308 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 10 00:18:59.104205 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 10 00:18:59.105036 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 10 00:18:59.106220 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 10 00:18:59.105420 unknown[962]: wrote ssh authorized keys file for user: core May 10 00:18:59.107726 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:18:59.107726 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 10 00:18:59.226714 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 10 00:18:59.394544 systemd-networkd[785]: eth0: Gained IPv6LL May 10 00:19:00.098641 systemd-networkd[785]: eth1: Gained IPv6LL May 10 00:19:00.203242 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 10 00:19:00.203242 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:19:00.206803 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 10 00:19:00.807514 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 10 00:19:01.018075 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 10 00:19:01.018075 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:19:01.022394 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 10 00:19:01.022394 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 10 00:19:01.022394 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 10 00:19:01.022394 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 10 00:19:01.022394 ignition[962]: INFO : files: files passed May 10 00:19:01.022394 ignition[962]: INFO : Ignition finished successfully May 10 00:19:01.023797 systemd[1]: Finished ignition-files.service - Ignition (files). May 10 00:19:01.029489 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 10 00:19:01.033385 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 10 00:19:01.039931 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:19:01.040043 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 10 00:19:01.049867 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:19:01.049867 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 10 00:19:01.052786 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:19:01.055460 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:19:01.056372 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 10 00:19:01.064887 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 10 00:19:01.098757 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:19:01.098985 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 10 00:19:01.102160 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 10 00:19:01.102945 systemd[1]: Reached target initrd.target - Initrd Default Target. May 10 00:19:01.103968 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 10 00:19:01.108548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 10 00:19:01.125767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:19:01.131522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 10 00:19:01.149203 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 10 00:19:01.150045 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:19:01.151200 systemd[1]: Stopped target timers.target - Timer Units. May 10 00:19:01.152255 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:19:01.152402 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 10 00:19:01.153816 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 10 00:19:01.154749 systemd[1]: Stopped target basic.target - Basic System. May 10 00:19:01.155858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 10 00:19:01.156903 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 10 00:19:01.157995 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 10 00:19:01.159058 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 10 00:19:01.160150 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 10 00:19:01.161326 systemd[1]: Stopped target sysinit.target - System Initialization. May 10 00:19:01.162336 systemd[1]: Stopped target local-fs.target - Local File Systems. May 10 00:19:01.163458 systemd[1]: Stopped target swap.target - Swaps. May 10 00:19:01.164337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:19:01.164466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 10 00:19:01.165761 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 10 00:19:01.166445 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:19:01.167487 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 10 00:19:01.168591 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 10 00:19:01.169303 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:19:01.169428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 10 00:19:01.170969 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:19:01.171096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 10 00:19:01.172426 systemd[1]: ignition-files.service: Deactivated successfully. May 10 00:19:01.172521 systemd[1]: Stopped ignition-files.service - Ignition (files). May 10 00:19:01.173429 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 10 00:19:01.173525 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 10 00:19:01.183599 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 10 00:19:01.184591 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:19:01.184789 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:19:01.190536 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 10 00:19:01.191458 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:19:01.191587 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:19:01.192603 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:19:01.192757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 10 00:19:01.202578 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:19:01.202679 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 10 00:19:01.210868 ignition[1014]: INFO : Ignition 2.19.0 May 10 00:19:01.210868 ignition[1014]: INFO : Stage: umount May 10 00:19:01.210868 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:19:01.210868 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 10 00:19:01.214031 ignition[1014]: INFO : umount: umount passed May 10 00:19:01.214031 ignition[1014]: INFO : Ignition finished successfully May 10 00:19:01.212632 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:19:01.212735 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 10 00:19:01.215930 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:19:01.216020 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 10 00:19:01.219407 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:19:01.219483 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 10 00:19:01.223959 systemd[1]: ignition-fetch.service: Deactivated successfully. May 10 00:19:01.224020 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 10 00:19:01.225224 systemd[1]: Stopped target network.target - Network. May 10 00:19:01.233597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:19:01.233687 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 10 00:19:01.235164 systemd[1]: Stopped target paths.target - Path Units. May 10 00:19:01.239377 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:19:01.243360 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:19:01.244860 systemd[1]: Stopped target slices.target - Slice Units. 
May 10 00:19:01.247747 systemd[1]: Stopped target sockets.target - Socket Units. May 10 00:19:01.251430 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:19:01.252315 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 10 00:19:01.252958 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:19:01.253001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 10 00:19:01.255204 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:19:01.255269 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 10 00:19:01.255872 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 10 00:19:01.255926 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 10 00:19:01.257970 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 10 00:19:01.259794 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 10 00:19:01.262819 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 10 00:19:01.263567 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:19:01.263662 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 10 00:19:01.264899 systemd-networkd[785]: eth1: DHCPv6 lease lost May 10 00:19:01.266218 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:19:01.266357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 10 00:19:01.268556 systemd-networkd[785]: eth0: DHCPv6 lease lost May 10 00:19:01.270237 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:19:01.270857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 10 00:19:01.271749 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 10 00:19:01.271802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 10 00:19:01.273618 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:19:01.275419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 10 00:19:01.276558 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:19:01.276595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 10 00:19:01.283518 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 10 00:19:01.284035 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:19:01.284106 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 10 00:19:01.286765 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:19:01.286823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 10 00:19:01.288370 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:19:01.288427 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 10 00:19:01.289130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:19:01.307869 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:19:01.308084 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 10 00:19:01.310601 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:19:01.310867 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:19:01.312267 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
May 10 00:19:01.312340 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 10 00:19:01.313061 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:19:01.313092 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:19:01.314008 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:19:01.314056 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 10 00:19:01.315532 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:19:01.315578 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 10 00:19:01.316934 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:19:01.316980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 10 00:19:01.325528 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 10 00:19:01.326113 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 10 00:19:01.326168 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:19:01.329935 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:19:01.330016 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:19:01.341243 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 10 00:19:01.341406 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 10 00:19:01.343188 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 10 00:19:01.352569 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 10 00:19:01.362944 systemd[1]: Switching root. May 10 00:19:01.402063 systemd-journald[237]: Journal stopped May 10 00:19:02.268402 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 10 00:19:02.268484 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:19:02.268501 kernel: SELinux: policy capability open_perms=1 May 10 00:19:02.268511 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:19:02.268521 kernel: SELinux: policy capability always_check_network=0 May 10 00:19:02.268535 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:19:02.268545 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:19:02.268555 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:19:02.268568 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:19:02.268577 kernel: audit: type=1403 audit(1746836341.522:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 10 00:19:02.268588 systemd[1]: Successfully loaded SELinux policy in 33.938ms. May 10 00:19:02.268605 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.190ms. May 10 00:19:02.268617 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 10 00:19:02.268628 systemd[1]: Detected virtualization kvm. May 10 00:19:02.268638 systemd[1]: Detected architecture arm64. May 10 00:19:02.268649 systemd[1]: Detected first boot. May 10 00:19:02.268661 systemd[1]: Hostname set to <ci-4081-3-3-n-2389c948d4>. May 10 00:19:02.268672 systemd[1]: Initializing machine ID from VM UUID.
May 10 00:19:02.268682 zram_generator::config[1057]: No configuration found. May 10 00:19:02.268694 systemd[1]: Populated /etc with preset unit settings. May 10 00:19:02.268704 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:19:02.268714 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 10 00:19:02.268724 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:19:02.268735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 10 00:19:02.268750 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 10 00:19:02.268763 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 10 00:19:02.268773 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 10 00:19:02.268783 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 10 00:19:02.268794 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 10 00:19:02.268805 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 10 00:19:02.268815 systemd[1]: Created slice user.slice - User and Session Slice. May 10 00:19:02.268825 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 10 00:19:02.268836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 10 00:19:02.268847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 10 00:19:02.268858 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 10 00:19:02.268870 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 10 00:19:02.268880 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 10 00:19:02.268892 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 10 00:19:02.268937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 10 00:19:02.268951 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 10 00:19:02.268962 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 10 00:19:02.268975 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 10 00:19:02.268986 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 10 00:19:02.268996 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 10 00:19:02.269007 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 10 00:19:02.269017 systemd[1]: Reached target slices.target - Slice Units. May 10 00:19:02.269027 systemd[1]: Reached target swap.target - Swaps. May 10 00:19:02.269038 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 10 00:19:02.269049 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 10 00:19:02.269061 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 10 00:19:02.269072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 10 00:19:02.269082 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 10 00:19:02.269094 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
May 10 00:19:02.269105 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 10 00:19:02.269115 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 10 00:19:02.269125 systemd[1]: Mounting media.mount - External Media Directory... May 10 00:19:02.269136 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 10 00:19:02.269146 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 10 00:19:02.269158 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 10 00:19:02.269170 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:19:02.269180 systemd[1]: Reached target machines.target - Containers. May 10 00:19:02.269194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 10 00:19:02.269207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:19:02.269220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 10 00:19:02.269231 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 10 00:19:02.269241 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:19:02.269252 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:19:02.269262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:19:02.269273 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 10 00:19:02.269283 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:19:02.269314 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:19:02.269328 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:19:02.269339 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 10 00:19:02.269349 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:19:02.269359 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:19:02.269370 systemd[1]: Starting systemd-journald.service - Journal Service... May 10 00:19:02.269380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 10 00:19:02.269391 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 10 00:19:02.269401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 10 00:19:02.269412 kernel: ACPI: bus type drm_connector registered May 10 00:19:02.269424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 10 00:19:02.269435 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:19:02.269446 systemd[1]: Stopped verity-setup.service. May 10 00:19:02.269457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 10 00:19:02.269468 kernel: loop: module loaded May 10 00:19:02.269478 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 10 00:19:02.269489 systemd[1]: Mounted media.mount - External Media Directory. May 10 00:19:02.269501 kernel: fuse: init (API version 7.39) May 10 00:19:02.269511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
May 10 00:19:02.269522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 10 00:19:02.269532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 10 00:19:02.269543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 10 00:19:02.269554 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 10 00:19:02.269565 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 10 00:19:02.269578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:19:02.269588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:19:02.269599 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:19:02.269610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:19:02.269620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:19:02.269631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:19:02.269644 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 10 00:19:02.269655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 10 00:19:02.269689 systemd-journald[1124]: Collecting audit messages is disabled. May 10 00:19:02.269711 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:19:02.269722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:19:02.269734 systemd-journald[1124]: Journal started May 10 00:19:02.269758 systemd-journald[1124]: Runtime Journal (/run/log/journal/62e4af68d84f449f9a50ff4a66e1ba00) is 8.0M, max 76.6M, 68.6M free. May 10 00:19:02.020058 systemd[1]: Queued start job for default target multi-user.target. May 10 00:19:02.039040 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 10 00:19:02.039590 systemd[1]: systemd-journald.service: Deactivated successfully. May 10 00:19:02.271461 systemd[1]: Started systemd-journald.service - Journal Service. May 10 00:19:02.273042 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 10 00:19:02.275061 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 10 00:19:02.277251 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 10 00:19:02.295062 systemd[1]: Reached target network-pre.target - Preparation for Network. May 10 00:19:02.301480 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 10 00:19:02.310417 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 10 00:19:02.312659 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 00:19:02.312705 systemd[1]: Reached target local-fs.target - Local File Systems. May 10 00:19:02.315252 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 10 00:19:02.324559 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 10 00:19:02.328041 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 10 00:19:02.328774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:19:02.335497 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 10 00:19:02.339720 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 10 00:19:02.341175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:19:02.346532 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 10 00:19:02.347378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:19:02.357531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 10 00:19:02.363333 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 10 00:19:02.370369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 10 00:19:02.371341 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 10 00:19:02.372091 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 10 00:19:02.373009 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 10 00:19:02.387528 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 10 00:19:02.402544 systemd-journald[1124]: Time spent on flushing to /var/log/journal/62e4af68d84f449f9a50ff4a66e1ba00 is 103.365ms for 1128 entries. May 10 00:19:02.402544 systemd-journald[1124]: System Journal (/var/log/journal/62e4af68d84f449f9a50ff4a66e1ba00) is 8.0M, max 584.8M, 576.8M free. May 10 00:19:02.522790 systemd-journald[1124]: Received client request to flush runtime journal. May 10 00:19:02.522852 kernel: loop0: detected capacity change from 0 to 194096 May 10 00:19:02.522867 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:19:02.522881 kernel: loop1: detected capacity change from 0 to 114328 May 10 00:19:02.409791 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 10 00:19:02.411681 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 10 00:19:02.414485 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 10 00:19:02.423727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 10 00:19:02.434503 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 10 00:19:02.453677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 10 00:19:02.490823 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 10 00:19:02.503879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 00:19:02.509611 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 10 00:19:02.513223 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 10 00:19:02.523531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 10 00:19:02.535186 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 10 00:19:02.566324 kernel: loop2: detected capacity change from 0 to 114432 May 10 00:19:02.575069 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. May 10 00:19:02.575171 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. 
May 10 00:19:02.590330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 10 00:19:02.605332 kernel: loop3: detected capacity change from 0 to 8 May 10 00:19:02.629446 kernel: loop4: detected capacity change from 0 to 194096 May 10 00:19:02.644318 kernel: loop5: detected capacity change from 0 to 114328 May 10 00:19:02.659327 kernel: loop6: detected capacity change from 0 to 114432 May 10 00:19:02.680366 kernel: loop7: detected capacity change from 0 to 8 May 10 00:19:02.681726 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 10 00:19:02.682554 (sd-merge)[1198]: Merged extensions into '/usr'. May 10 00:19:02.690045 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... May 10 00:19:02.690065 systemd[1]: Reloading... May 10 00:19:02.828105 zram_generator::config[1225]: No configuration found. May 10 00:19:02.905788 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:19:02.941053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:19:02.988491 systemd[1]: Reloading finished in 297 ms. May 10 00:19:03.015753 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 10 00:19:03.018368 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 10 00:19:03.031612 systemd[1]: Starting ensure-sysext.service... May 10 00:19:03.035470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 10 00:19:03.055397 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... May 10 00:19:03.055452 systemd[1]: Reloading... May 10 00:19:03.083859 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:19:03.084725 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 10 00:19:03.086212 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:19:03.086462 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 10 00:19:03.086511 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. May 10 00:19:03.089205 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:19:03.089220 systemd-tmpfiles[1263]: Skipping /boot May 10 00:19:03.101348 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. May 10 00:19:03.101364 systemd-tmpfiles[1263]: Skipping /boot May 10 00:19:03.139615 zram_generator::config[1289]: No configuration found. May 10 00:19:03.241435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:19:03.291840 systemd[1]: Reloading finished in 235 ms. May 10 00:19:03.312164 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 10 00:19:03.317950 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
May 10 00:19:03.327118 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 10 00:19:03.331514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 10 00:19:03.335738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 10 00:19:03.339721 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 10 00:19:03.343630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 10 00:19:03.348513 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 10 00:19:03.353649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:19:03.358601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:19:03.365855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:19:03.369588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:19:03.370400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:19:03.373749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:19:03.373942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:19:03.377929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:19:03.385597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 10 00:19:03.386275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:19:03.392693 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 10 00:19:03.396343 systemd[1]: Finished ensure-sysext.service. May 10 00:19:03.407884 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 10 00:19:03.409658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:19:03.409802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:19:03.424995 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:19:03.427376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:19:03.428550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:19:03.429403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:19:03.434306 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 00:19:03.434796 systemd-udevd[1334]: Using default interface naming scheme 'v255'. May 10 00:19:03.435588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 10 00:19:03.442162 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 10 00:19:03.446505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:19:03.446649 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 10 00:19:03.454784 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 10 00:19:03.456507 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 10 00:19:03.478155 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 10 00:19:03.501496 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 10 00:19:03.503712 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 10 00:19:03.505505 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 10 00:19:03.507347 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 10 00:19:03.510403 augenrules[1374]: No rules May 10 00:19:03.512954 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 10 00:19:03.523271 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:19:03.604548 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 10 00:19:03.619992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 10 00:19:03.621495 systemd[1]: Reached target time-set.target - System Time Set. May 10 00:19:03.623328 systemd-resolved[1332]: Positive Trust Anchors: May 10 00:19:03.623348 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:19:03.623381 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 10 00:19:03.631263 systemd-resolved[1332]: Using system hostname 'ci-4081-3-3-n-2389c948d4'. May 10 00:19:03.634371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 10 00:19:03.635219 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 10 00:19:03.643197 systemd-networkd[1367]: lo: Link UP May 10 00:19:03.643205 systemd-networkd[1367]: lo: Gained carrier May 10 00:19:03.643949 systemd-networkd[1367]: Enumeration completed May 10 00:19:03.644421 systemd[1]: Started systemd-networkd.service - Network Configuration. May 10 00:19:03.645113 systemd[1]: Reached target network.target - Network. May 10 00:19:03.674689 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 10 00:19:03.744636 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 10 00:19:03.744997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 10 00:19:03.746837 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
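The positive trust anchor logged by systemd-resolved is the root zone's KSK-2017 DS record, and the negative anchors are the private and special-use zones that DNSSEC validation should skip. Extra anchors can be supplied as *.positive fragments; a sketch that mirrors the built-in default:

    # /etc/dnssec-trust-anchors.d/root.positive (illustrative)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d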
May 10 00:19:03.746847 systemd-networkd[1367]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:19:03.750489 systemd-networkd[1367]: eth1: Link UP May 10 00:19:03.750627 systemd-networkd[1367]: eth1: Gained carrier May 10 00:19:03.750653 systemd-networkd[1367]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:19:03.751575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 10 00:19:03.763063 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 10 00:19:03.767456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 10 00:19:03.768684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 10 00:19:03.768831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 00:19:03.777333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:19:03.777497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 10 00:19:03.782393 systemd-networkd[1367]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 10 00:19:03.783615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:19:03.785520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 10 00:19:03.786598 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:19:03.786732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 10 00:19:03.788048 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:03.791000 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:19:03.792971 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 10 00:19:03.806328 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1385) May 10 00:19:03.810546 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:19:03.810809 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 00:19:03.811232 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:03.812338 systemd-networkd[1367]: eth0: Link UP May 10 00:19:03.812348 systemd-networkd[1367]: eth0: Gained carrier May 10 00:19:03.812369 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 10 00:19:03.817798 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. 
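The "potentially unpredictable interface name" warnings mean zz-default.network matched eth0/eth1 only by their kernel-assigned names. Pinning the match to a stable attribute silences the warning; a sketch with a placeholder MAC address:

    # /etc/systemd/network/10-eth1.network (hypothetical)
    [Match]
    # Match on the hardware address instead of the mutable interface name.
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4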
May 10 00:19:03.822349 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:19:03.838883 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 10 00:19:03.838982 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 10 00:19:03.838995 kernel: [drm] features: -context_init May 10 00:19:03.842936 kernel: [drm] number of scanouts: 1 May 10 00:19:03.843056 kernel: [drm] number of cap sets: 0 May 10 00:19:03.863313 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 10 00:19:03.870447 systemd-networkd[1367]: eth0: DHCPv4 address 91.107.204.139/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 10 00:19:03.870995 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:03.871676 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:03.884038 kernel: Console: switching to colour frame buffer device 160x50 May 10 00:19:03.894310 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 10 00:19:03.909760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:19:03.912535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:19:03.912947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:19:03.916076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 10 00:19:03.926742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 10 00:19:03.932667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 10 00:19:03.944616 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 10 00:19:04.005537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 10 00:19:04.046464 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 10 00:19:04.052670 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 10 00:19:04.079345 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:19:04.104849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 10 00:19:04.107176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 10 00:19:04.108463 systemd[1]: Reached target sysinit.target - System Initialization. May 10 00:19:04.109148 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 10 00:19:04.109878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 10 00:19:04.110780 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 10 00:19:04.111687 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 10 00:19:04.112391 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 10 00:19:04.113098 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 00:19:04.113133 systemd[1]: Reached target paths.target - Path Units. May 10 00:19:04.113633 systemd[1]: Reached target timers.target - Timer Units. 
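The lvmetad warning from lvm2-activation-early is harmless here: no metadata daemon runs on this image, so LVM falls back to scanning devices directly, as the message says. Assuming an LVM 2.02-era lvm.conf (the option was removed in 2.03), the warning could be silenced with:

    # /etc/lvm/lvm.conf (relevant excerpt, illustrative)
    global {
        # No lvmetad daemon on this image; scan devices directly.
        use_lvmetad = 0
    }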
May 10 00:19:04.115937 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 10 00:19:04.119641 systemd[1]: Starting docker.socket - Docker Socket for the API... May 10 00:19:04.126586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 10 00:19:04.129011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 10 00:19:04.130541 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 10 00:19:04.131430 systemd[1]: Reached target sockets.target - Socket Units. May 10 00:19:04.132095 systemd[1]: Reached target basic.target - Basic System. May 10 00:19:04.132834 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 10 00:19:04.132869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 10 00:19:04.143242 systemd[1]: Starting containerd.service - containerd container runtime... May 10 00:19:04.147609 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 10 00:19:04.149807 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:19:04.151651 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 10 00:19:04.157012 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 10 00:19:04.161484 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 10 00:19:04.162099 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 10 00:19:04.163350 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 10 00:19:04.166206 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 10 00:19:04.168653 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 10 00:19:04.170829 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 10 00:19:04.176182 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 10 00:19:04.182462 systemd[1]: Starting systemd-logind.service - User Login Management... May 10 00:19:04.184054 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 00:19:04.185710 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 00:19:04.187545 systemd[1]: Starting update-engine.service - Update Engine... May 10 00:19:04.193961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 10 00:19:04.224665 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 10 00:19:04.228706 jq[1449]: false May 10 00:19:04.236369 coreos-metadata[1447]: May 10 00:19:04.235 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 10 00:19:04.238069 coreos-metadata[1447]: May 10 00:19:04.237 INFO Fetch successful May 10 00:19:04.238069 coreos-metadata[1447]: May 10 00:19:04.237 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 10 00:19:04.238778 coreos-metadata[1447]: May 10 00:19:04.238 INFO Fetch successful May 10 00:19:04.241566 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
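coreos-metadata is reading the Hetzner link-local metadata service here; the two URLs it fetches can be queried by hand when debugging provisioning, e.g.:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks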
May 10 00:19:04.241811 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 10 00:19:04.246765 dbus-daemon[1448]: [system] SELinux support is enabled May 10 00:19:04.247113 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 10 00:19:04.253620 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 10 00:19:04.254022 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 10 00:19:04.266730 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 00:19:04.266773 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 10 00:19:04.267016 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 10 00:19:04.269338 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 00:19:04.269367 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 10 00:19:04.272899 jq[1460]: true May 10 00:19:04.300122 systemd[1]: motdgen.service: Deactivated successfully. May 10 00:19:04.300323 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 10 00:19:04.320181 jq[1483]: true May 10 00:19:04.323048 tar[1464]: linux-arm64/helm May 10 00:19:04.332351 update_engine[1459]: I20250510 00:19:04.332139 1459 main.cc:92] Flatcar Update Engine starting May 10 00:19:04.334220 update_engine[1459]: I20250510 00:19:04.334178 1459 update_check_scheduler.cc:74] Next update check in 10m56s May 10 00:19:04.334478 systemd[1]: Started update-engine.service - Update Engine. May 10 00:19:04.335272 extend-filesystems[1451]: Found loop4 May 10 00:19:04.336810 extend-filesystems[1451]: Found loop5 May 10 00:19:04.336810 extend-filesystems[1451]: Found loop6 May 10 00:19:04.336810 extend-filesystems[1451]: Found loop7 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda May 10 00:19:04.336810 extend-filesystems[1451]: Found sda1 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda2 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda3 May 10 00:19:04.336810 extend-filesystems[1451]: Found usr May 10 00:19:04.336810 extend-filesystems[1451]: Found sda4 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda6 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda7 May 10 00:19:04.336810 extend-filesystems[1451]: Found sda9 May 10 00:19:04.336810 extend-filesystems[1451]: Checking size of /dev/sda9 May 10 00:19:04.339197 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 10 00:19:04.368554 extend-filesystems[1451]: Resized partition /dev/sda9 May 10 00:19:04.371635 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) May 10 00:19:04.379398 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 10 00:19:04.382222 systemd-logind[1458]: New seat seat0. May 10 00:19:04.393617 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) May 10 00:19:04.393645 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) May 10 00:19:04.393936 systemd[1]: Started systemd-logind.service - User Login Management. 
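The resize that extend-filesystems starts here grows the root ext4 from 1617920 to 9393147 4 KiB blocks, i.e. from roughly 6.2 GiB to about 35.8 GiB, online while the filesystem is mounted. The manual equivalent, after enlarging the partition, would be:

    resize2fs /dev/sda9    # online grow of a mounted ext4, as resize2fs 1.47.1 does here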
May 10 00:19:04.438940 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 10 00:19:04.440662 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 10 00:19:04.492311 bash[1521]: Updated "/home/core/.ssh/authorized_keys" May 10 00:19:04.495614 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 10 00:19:04.519821 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1365) May 10 00:19:04.522192 systemd[1]: Starting sshkeys.service... May 10 00:19:04.532500 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 10 00:19:04.548663 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 10 00:19:04.552650 extend-filesystems[1498]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 10 00:19:04.552650 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 5 May 10 00:19:04.552650 extend-filesystems[1498]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 10 00:19:04.556945 extend-filesystems[1451]: Resized filesystem in /dev/sda9 May 10 00:19:04.556945 extend-filesystems[1451]: Found sr0 May 10 00:19:04.562644 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 10 00:19:04.569073 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 00:19:04.571140 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 10 00:19:04.648138 coreos-metadata[1526]: May 10 00:19:04.646 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 10 00:19:04.648138 coreos-metadata[1526]: May 10 00:19:04.647 INFO Fetch successful May 10 00:19:04.650464 unknown[1526]: wrote ssh authorized keys file for user: core May 10 00:19:04.680667 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys" May 10 00:19:04.681635 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 10 00:19:04.687823 systemd[1]: Finished sshkeys.service. May 10 00:19:04.725304 containerd[1476]: time="2025-05-10T00:19:04.723695760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 10 00:19:04.782972 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 00:19:04.805971 containerd[1476]: time="2025-05-10T00:19:04.805700960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.810621 containerd[1476]: time="2025-05-10T00:19:04.810569080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 00:19:04.811257 containerd[1476]: time="2025-05-10T00:19:04.811233320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.811611680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.811797320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.811821760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.811929200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.811945840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.812130920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.812155680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.812168520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.812178600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.812740 containerd[1476]: time="2025-05-10T00:19:04.812255320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.814082 containerd[1476]: time="2025-05-10T00:19:04.814052280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 00:19:04.815067 containerd[1476]: time="2025-05-10T00:19:04.815040040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 00:19:04.815424 containerd[1476]: time="2025-05-10T00:19:04.815407400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 00:19:04.815610 containerd[1476]: time="2025-05-10T00:19:04.815591320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 00:19:04.816049 containerd[1476]: time="2025-05-10T00:19:04.816029640Z" level=info msg="metadata content store policy set" policy=shared May 10 00:19:04.822586 containerd[1476]: time="2025-05-10T00:19:04.822558400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 00:19:04.822851 containerd[1476]: time="2025-05-10T00:19:04.822835440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 00:19:04.822990 containerd[1476]: time="2025-05-10T00:19:04.822975080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 10 00:19:04.823537 containerd[1476]: time="2025-05-10T00:19:04.823079920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 10 00:19:04.823537 containerd[1476]: time="2025-05-10T00:19:04.823100960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 00:19:04.823537 containerd[1476]: time="2025-05-10T00:19:04.823251400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824192480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824353720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824371440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824385480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824399160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824413960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824429280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824442480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824458040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824470560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824489880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824502440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824522480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825175 containerd[1476]: time="2025-05-10T00:19:04.824540680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824554280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824567360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824579520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824597960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824611080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824625760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824641280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824661760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824679600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824694400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824706680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824722320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824746680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824758920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825561 containerd[1476]: time="2025-05-10T00:19:04.824770000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824904240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824926080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824938000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824949360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824959080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824972840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.824984200Z" level=info msg="NRI interface is disabled by configuration." 
May 10 00:19:04.825822 containerd[1476]: time="2025-05-10T00:19:04.825000720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 10 00:19:04.825975 containerd[1476]: time="2025-05-10T00:19:04.825780040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 00:19:04.825975 containerd[1476]: time="2025-05-10T00:19:04.825856280Z" level=info msg="Connect containerd service" May 10 00:19:04.825975 containerd[1476]: time="2025-05-10T00:19:04.825921160Z" level=info msg="using legacy CRI server" May 10 00:19:04.825975 containerd[1476]: time="2025-05-10T00:19:04.825931520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 10 00:19:04.827318 containerd[1476]: time="2025-05-10T00:19:04.826169360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 00:19:04.831502 containerd[1476]: time="2025-05-10T00:19:04.831304360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:19:04.832191 containerd[1476]: time="2025-05-10T00:19:04.832111600Z" level=info msg="Start subscribing containerd event" May 10 00:19:04.832191 containerd[1476]: time="2025-05-10T00:19:04.832172120Z" level=info msg="Start recovering state" May 10 00:19:04.832268 containerd[1476]: time="2025-05-10T00:19:04.832244520Z" level=info msg="Start event monitor" May 10 00:19:04.832268 containerd[1476]: time="2025-05-10T00:19:04.832258640Z" level=info msg="Start snapshots syncer" May 10 00:19:04.832330 containerd[1476]: time="2025-05-10T00:19:04.832268720Z" level=info msg="Start cni network conf syncer for default" May 10 00:19:04.832330 containerd[1476]: time="2025-05-10T00:19:04.832276040Z" level=info msg="Start streaming server" May 10 00:19:04.834684 containerd[1476]: time="2025-05-10T00:19:04.832816840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 00:19:04.834684 containerd[1476]: time="2025-05-10T00:19:04.832866080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 00:19:04.834684 containerd[1476]: time="2025-05-10T00:19:04.832937800Z" level=info msg="containerd successfully booted in 0.113270s" May 10 00:19:04.833043 systemd[1]: Started containerd.service - containerd container runtime. May 10 00:19:05.002716 tar[1464]: linux-arm64/LICENSE May 10 00:19:05.002965 tar[1464]: linux-arm64/README.md May 10 00:19:05.018275 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 10 00:19:05.154491 systemd-networkd[1367]: eth1: Gained IPv6LL May 10 00:19:05.157506 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:05.162658 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 10 00:19:05.164260 systemd[1]: Reached target network-online.target - Network is Online. May 10 00:19:05.173619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:19:05.181390 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 10 00:19:05.206762 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 10 00:19:05.282421 systemd-networkd[1367]: eth0: Gained IPv6LL May 10 00:19:05.284184 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. May 10 00:19:05.874473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:05.880601 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:06.124316 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 00:19:06.146145 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 10 00:19:06.154627 systemd[1]: Starting issuegen.service - Generate /run/issue... May 10 00:19:06.162334 systemd[1]: issuegen.service: Deactivated successfully. May 10 00:19:06.162619 systemd[1]: Finished issuegen.service - Generate /run/issue. May 10 00:19:06.173311 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 10 00:19:06.179927 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 10 00:19:06.187927 systemd[1]: Started getty@tty1.service - Getty on tty1. May 10 00:19:06.190615 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
May 10 00:19:06.193459 systemd[1]: Reached target getty.target - Login Prompts. May 10 00:19:06.194988 systemd[1]: Reached target multi-user.target - Multi-User System. May 10 00:19:06.195839 systemd[1]: Startup finished in 770ms (kernel) + 5.835s (initrd) + 4.707s (userspace) = 11.313s. May 10 00:19:06.481117 kubelet[1562]: E0510 00:19:06.480965 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:06.485273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:06.485460 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:19:16.549582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:19:16.557719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:19:16.669629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:16.669944 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:16.724588 kubelet[1599]: E0510 00:19:16.724527 1599 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:16.730130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:16.730354 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:19:26.799441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:19:26.812673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:19:26.920001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:26.925666 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:26.972626 kubelet[1615]: E0510 00:19:26.972565 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:26.976403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:26.976689 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:19:34.939640 systemd-timesyncd[1347]: Contacted time server 5.75.181.179:123 (2.flatcar.pool.ntp.org). May 10 00:19:34.939767 systemd-timesyncd[1347]: Initial clock synchronization to Sat 2025-05-10 00:19:34.939373 UTC. May 10 00:19:34.939854 systemd-resolved[1332]: Clock change detected. Flushing caches. May 10 00:19:36.562843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 10 00:19:36.570620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
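The kubelet crash above is fully explained by its own error: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is written by kubeadm init/join, so these failures just mean the node has not been joined. A minimal hand-written sketch of the expected file, for illustration only:

    # /var/lib/kubelet/config.yaml (illustrative minimal KubeletConfiguration)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd matches the SystemdCgroup:true runc option in containerd's CRI config above.
    cgroupDriver: systemd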
May 10 00:19:36.680320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:36.685069 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:36.726406 kubelet[1632]: E0510 00:19:36.726328 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:36.730068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:36.730263 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:19:46.812842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 10 00:19:46.826662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:19:46.941780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:46.946894 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:47.004684 kubelet[1649]: E0510 00:19:47.004553 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:47.008101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:47.008289 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:19:49.404163 update_engine[1459]: I20250510 00:19:49.403405 1459 update_attempter.cc:509] Updating boot flags... May 10 00:19:49.452365 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1665) May 10 00:19:49.512264 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1668) May 10 00:19:57.062561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 10 00:19:57.078733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:19:57.193934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:19:57.209080 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:19:57.256051 kubelet[1682]: E0510 00:19:57.255929 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:19:57.259336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:19:57.259537 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:20:07.312727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 10 00:20:07.319657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
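The climbing restart counter is systemd's Restart= logic at work: the attempts land about ten seconds apart because the kubelet unit, in the usual kubeadm drop-in layout, carries roughly:

    [Service]
    Restart=always
    RestartSec=10

so the crash loop continues indefinitely until the missing config file appears.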
May 10 00:20:07.428550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:20:07.447876 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:20:07.497247 kubelet[1698]: E0510 00:20:07.497180 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:20:07.500995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:20:07.501370 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:20:14.102973 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 10 00:20:14.108938 systemd[1]: Started sshd@0-91.107.204.139:22-60.164.133.37:52200.service - OpenSSH per-connection server daemon (60.164.133.37:52200). May 10 00:20:15.069401 sshd[1706]: Connection closed by authenticating user root 60.164.133.37 port 52200 [preauth] May 10 00:20:15.074640 systemd[1]: sshd@0-91.107.204.139:22-60.164.133.37:52200.service: Deactivated successfully. May 10 00:20:15.282827 systemd[1]: Started sshd@1-91.107.204.139:22-60.164.133.37:53678.service - OpenSSH per-connection server daemon (60.164.133.37:53678). May 10 00:20:16.237990 sshd[1711]: Connection closed by authenticating user root 60.164.133.37 port 53678 [preauth] May 10 00:20:16.242036 systemd[1]: sshd@1-91.107.204.139:22-60.164.133.37:53678.service: Deactivated successfully. May 10 00:20:16.447846 systemd[1]: Started sshd@2-91.107.204.139:22-60.164.133.37:54758.service - OpenSSH per-connection server daemon (60.164.133.37:54758). May 10 00:20:17.401751 sshd[1716]: Connection closed by authenticating user root 60.164.133.37 port 54758 [preauth] May 10 00:20:17.404415 systemd[1]: sshd@2-91.107.204.139:22-60.164.133.37:54758.service: Deactivated successfully. May 10 00:20:17.562224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 10 00:20:17.569708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:20:17.608606 systemd[1]: Started sshd@3-91.107.204.139:22-60.164.133.37:56088.service - OpenSSH per-connection server daemon (60.164.133.37:56088). May 10 00:20:17.677866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:20:17.682860 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:20:17.733392 kubelet[1731]: E0510 00:20:17.733350 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:20:17.737097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:20:17.737260 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:20:18.558468 sshd[1724]: Connection closed by authenticating user root 60.164.133.37 port 56088 [preauth] May 10 00:20:18.560994 systemd[1]: sshd@3-91.107.204.139:22-60.164.133.37:56088.service: Deactivated successfully. 
May 10 00:20:18.758672 systemd[1]: Started sshd@4-91.107.204.139:22-60.164.133.37:57492.service - OpenSSH per-connection server daemon (60.164.133.37:57492). May 10 00:20:19.710387 sshd[1741]: Connection closed by authenticating user root 60.164.133.37 port 57492 [preauth] May 10 00:20:19.713076 systemd[1]: sshd@4-91.107.204.139:22-60.164.133.37:57492.service: Deactivated successfully. May 10 00:20:19.915865 systemd[1]: Started sshd@5-91.107.204.139:22-60.164.133.37:58934.service - OpenSSH per-connection server daemon (60.164.133.37:58934). May 10 00:20:20.858939 sshd[1746]: Connection closed by authenticating user root 60.164.133.37 port 58934 [preauth] May 10 00:20:20.861876 systemd[1]: sshd@5-91.107.204.139:22-60.164.133.37:58934.service: Deactivated successfully. May 10 00:20:21.054015 systemd[1]: Started sshd@6-91.107.204.139:22-60.164.133.37:60286.service - OpenSSH per-connection server daemon (60.164.133.37:60286). May 10 00:20:22.019099 sshd[1751]: Connection closed by authenticating user root 60.164.133.37 port 60286 [preauth] May 10 00:20:22.023605 systemd[1]: sshd@6-91.107.204.139:22-60.164.133.37:60286.service: Deactivated successfully. May 10 00:20:22.220807 systemd[1]: Started sshd@7-91.107.204.139:22-60.164.133.37:33472.service - OpenSSH per-connection server daemon (60.164.133.37:33472). May 10 00:20:23.167072 sshd[1756]: Connection closed by authenticating user root 60.164.133.37 port 33472 [preauth] May 10 00:20:23.170231 systemd[1]: sshd@7-91.107.204.139:22-60.164.133.37:33472.service: Deactivated successfully. May 10 00:20:23.367022 systemd[1]: Started sshd@8-91.107.204.139:22-60.164.133.37:34752.service - OpenSSH per-connection server daemon (60.164.133.37:34752). May 10 00:20:24.311364 sshd[1761]: Connection closed by authenticating user root 60.164.133.37 port 34752 [preauth] May 10 00:20:24.315861 systemd[1]: sshd@8-91.107.204.139:22-60.164.133.37:34752.service: Deactivated successfully. May 10 00:20:24.515693 systemd[1]: Started sshd@9-91.107.204.139:22-60.164.133.37:36028.service - OpenSSH per-connection server daemon (60.164.133.37:36028). May 10 00:20:25.469365 sshd[1766]: Connection closed by authenticating user root 60.164.133.37 port 36028 [preauth] May 10 00:20:25.472824 systemd[1]: sshd@9-91.107.204.139:22-60.164.133.37:36028.service: Deactivated successfully. May 10 00:20:25.681574 systemd[1]: Started sshd@10-91.107.204.139:22-60.164.133.37:37548.service - OpenSSH per-connection server daemon (60.164.133.37:37548). May 10 00:20:26.633052 sshd[1771]: Connection closed by authenticating user root 60.164.133.37 port 37548 [preauth] May 10 00:20:26.635888 systemd[1]: sshd@10-91.107.204.139:22-60.164.133.37:37548.service: Deactivated successfully. May 10 00:20:26.839816 systemd[1]: Started sshd@11-91.107.204.139:22-60.164.133.37:38976.service - OpenSSH per-connection server daemon (60.164.133.37:38976). May 10 00:20:27.783794 sshd[1776]: Connection closed by authenticating user root 60.164.133.37 port 38976 [preauth] May 10 00:20:27.788058 systemd[1]: sshd@11-91.107.204.139:22-60.164.133.37:38976.service: Deactivated successfully. May 10 00:20:27.792202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 10 00:20:27.800692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:20:27.912521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 10 00:20:27.925814 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:20:27.968543 kubelet[1788]: E0510 00:20:27.968442 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:20:27.983021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:20:27.984333 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:20:27.994233 systemd[1]: Started sshd@12-91.107.204.139:22-60.164.133.37:40278.service - OpenSSH per-connection server daemon (60.164.133.37:40278). May 10 00:20:28.943684 sshd[1797]: Connection closed by authenticating user root 60.164.133.37 port 40278 [preauth] May 10 00:20:28.947363 systemd[1]: sshd@12-91.107.204.139:22-60.164.133.37:40278.service: Deactivated successfully. May 10 00:20:29.152735 systemd[1]: Started sshd@13-91.107.204.139:22-60.164.133.37:41568.service - OpenSSH per-connection server daemon (60.164.133.37:41568). May 10 00:20:30.114099 sshd[1802]: Connection closed by authenticating user root 60.164.133.37 port 41568 [preauth] May 10 00:20:30.116649 systemd[1]: sshd@13-91.107.204.139:22-60.164.133.37:41568.service: Deactivated successfully. May 10 00:20:30.316952 systemd[1]: Started sshd@14-91.107.204.139:22-60.164.133.37:42940.service - OpenSSH per-connection server daemon (60.164.133.37:42940). May 10 00:20:31.259548 sshd[1807]: Connection closed by authenticating user root 60.164.133.37 port 42940 [preauth] May 10 00:20:31.262990 systemd[1]: sshd@14-91.107.204.139:22-60.164.133.37:42940.service: Deactivated successfully. May 10 00:20:31.475885 systemd[1]: Started sshd@15-91.107.204.139:22-60.164.133.37:44260.service - OpenSSH per-connection server daemon (60.164.133.37:44260). May 10 00:20:32.441345 sshd[1812]: Connection closed by authenticating user root 60.164.133.37 port 44260 [preauth] May 10 00:20:32.444238 systemd[1]: sshd@15-91.107.204.139:22-60.164.133.37:44260.service: Deactivated successfully. May 10 00:20:32.746737 systemd[1]: Started sshd@16-91.107.204.139:22-60.164.133.37:45818.service - OpenSSH per-connection server daemon (60.164.133.37:45818). May 10 00:20:33.961937 sshd[1817]: Connection closed by authenticating user root 60.164.133.37 port 45818 [preauth] May 10 00:20:33.964878 systemd[1]: sshd@16-91.107.204.139:22-60.164.133.37:45818.service: Deactivated successfully. May 10 00:20:34.109771 systemd[1]: Started sshd@17-91.107.204.139:22-60.164.133.37:47580.service - OpenSSH per-connection server daemon (60.164.133.37:47580). May 10 00:20:35.050976 sshd[1822]: Connection closed by authenticating user root 60.164.133.37 port 47580 [preauth] May 10 00:20:35.053820 systemd[1]: sshd@17-91.107.204.139:22-60.164.133.37:47580.service: Deactivated successfully. May 10 00:20:35.254219 systemd[1]: Started sshd@18-91.107.204.139:22-60.164.133.37:48754.service - OpenSSH per-connection server daemon (60.164.133.37:48754). May 10 00:20:36.222697 sshd[1827]: Connection closed by authenticating user root 60.164.133.37 port 48754 [preauth] May 10 00:20:36.225839 systemd[1]: sshd@18-91.107.204.139:22-60.164.133.37:48754.service: Deactivated successfully. 
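The long run of "Connection closed by authenticating user root ... [preauth]" lines is a password-guessing bot at 60.164.133.37 cycling through source ports; each attempt dies before authentication completes, most likely because this host only accepts public keys. Hardening that makes such scans even cheaper to reject, as an illustrative sshd_config fragment:

    # /etc/ssh/sshd_config.d/90-hardening.conf (illustrative)
    PermitRootLogin no
    PasswordAuthentication no
    # Begin randomly dropping unauthenticated connections once 10 are pending.
    MaxStartups 10:30:60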
May 10 00:20:36.430747 systemd[1]: Started sshd@19-91.107.204.139:22-60.164.133.37:50072.service - OpenSSH per-connection server daemon (60.164.133.37:50072). May 10 00:20:37.397377 sshd[1832]: Connection closed by authenticating user root 60.164.133.37 port 50072 [preauth] May 10 00:20:37.400810 systemd[1]: sshd@19-91.107.204.139:22-60.164.133.37:50072.service: Deactivated successfully. May 10 00:20:37.607767 systemd[1]: Started sshd@20-91.107.204.139:22-60.164.133.37:51528.service - OpenSSH per-connection server daemon (60.164.133.37:51528). May 10 00:20:38.062762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 10 00:20:38.070531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 10 00:20:38.185879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 10 00:20:38.191754 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 10 00:20:38.237253 kubelet[1847]: E0510 00:20:38.237140 1847 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:20:38.241109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:20:38.241386 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:20:38.558992 sshd[1837]: Connection closed by authenticating user root 60.164.133.37 port 51528 [preauth] May 10 00:20:38.561900 systemd[1]: sshd@20-91.107.204.139:22-60.164.133.37:51528.service: Deactivated successfully. May 10 00:20:38.762756 systemd[1]: Started sshd@21-91.107.204.139:22-60.164.133.37:53020.service - OpenSSH per-connection server daemon (60.164.133.37:53020). May 10 00:20:39.702352 sshd[1858]: Connection closed by authenticating user root 60.164.133.37 port 53020 [preauth] May 10 00:20:39.706267 systemd[1]: sshd@21-91.107.204.139:22-60.164.133.37:53020.service: Deactivated successfully. May 10 00:20:39.912803 systemd[1]: Started sshd@22-91.107.204.139:22-60.164.133.37:54256.service - OpenSSH per-connection server daemon (60.164.133.37:54256). May 10 00:20:40.862283 sshd[1863]: Connection closed by authenticating user root 60.164.133.37 port 54256 [preauth] May 10 00:20:40.864930 systemd[1]: sshd@22-91.107.204.139:22-60.164.133.37:54256.service: Deactivated successfully. May 10 00:20:41.068655 systemd[1]: Started sshd@23-91.107.204.139:22-60.164.133.37:55618.service - OpenSSH per-connection server daemon (60.164.133.37:55618). May 10 00:20:42.017036 sshd[1868]: Connection closed by authenticating user root 60.164.133.37 port 55618 [preauth] May 10 00:20:42.021361 systemd[1]: sshd@23-91.107.204.139:22-60.164.133.37:55618.service: Deactivated successfully. May 10 00:20:42.216778 systemd[1]: Started sshd@24-91.107.204.139:22-60.164.133.37:56904.service - OpenSSH per-connection server daemon (60.164.133.37:56904). May 10 00:20:43.166423 sshd[1873]: Connection closed by authenticating user root 60.164.133.37 port 56904 [preauth] May 10 00:20:43.168004 systemd[1]: sshd@24-91.107.204.139:22-60.164.133.37:56904.service: Deactivated successfully. May 10 00:20:43.371059 systemd[1]: Started sshd@25-91.107.204.139:22-60.164.133.37:58154.service - OpenSSH per-connection server daemon (60.164.133.37:58154). 
May 10 00:20:44.317349 sshd[1878]: Connection closed by authenticating user root 60.164.133.37 port 58154 [preauth]
May 10 00:20:44.321148 systemd[1]: sshd@25-91.107.204.139:22-60.164.133.37:58154.service: Deactivated successfully.
May 10 00:20:44.523799 systemd[1]: Started sshd@26-91.107.204.139:22-60.164.133.37:59756.service - OpenSSH per-connection server daemon (60.164.133.37:59756).
May 10 00:20:45.460365 sshd[1883]: Connection closed by authenticating user root 60.164.133.37 port 59756 [preauth]
May 10 00:20:45.464200 systemd[1]: sshd@26-91.107.204.139:22-60.164.133.37:59756.service: Deactivated successfully.
May 10 00:20:45.672632 systemd[1]: Started sshd@27-91.107.204.139:22-60.164.133.37:32992.service - OpenSSH per-connection server daemon (60.164.133.37:32992).
May 10 00:20:46.560553 systemd[1]: Started sshd@28-91.107.204.139:22-147.75.109.163:39490.service - OpenSSH per-connection server daemon (147.75.109.163:39490).
May 10 00:20:46.649040 sshd[1888]: Connection closed by authenticating user root 60.164.133.37 port 32992 [preauth]
May 10 00:20:46.652513 systemd[1]: sshd@27-91.107.204.139:22-60.164.133.37:32992.service: Deactivated successfully.
May 10 00:20:46.839574 systemd[1]: Started sshd@29-91.107.204.139:22-60.164.133.37:34180.service - OpenSSH per-connection server daemon (60.164.133.37:34180).
May 10 00:20:47.570341 sshd[1891]: Accepted publickey for core from 147.75.109.163 port 39490 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:47.573540 sshd[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:47.587824 systemd-logind[1458]: New session 1 of user core.
May 10 00:20:47.588473 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 10 00:20:47.594649 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 10 00:20:47.611103 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 10 00:20:47.622059 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 10 00:20:47.626135 (systemd)[1900]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 10 00:20:47.742174 systemd[1900]: Queued start job for default target default.target.
May 10 00:20:47.755326 systemd[1900]: Created slice app.slice - User Application Slice.
May 10 00:20:47.755416 systemd[1900]: Reached target paths.target - Paths.
May 10 00:20:47.755451 systemd[1900]: Reached target timers.target - Timers.
May 10 00:20:47.757645 systemd[1900]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 10 00:20:47.773225 systemd[1900]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 10 00:20:47.773550 systemd[1900]: Reached target sockets.target - Sockets.
May 10 00:20:47.773589 systemd[1900]: Reached target basic.target - Basic System.
May 10 00:20:47.773672 systemd[1900]: Reached target default.target - Main User Target.
May 10 00:20:47.773733 systemd[1900]: Startup finished in 140ms.
May 10 00:20:47.773872 systemd[1]: Started user@500.service - User Manager for UID 500.
May 10 00:20:47.778521 systemd[1]: Started session-1.scope - Session 1 of User core.
May 10 00:20:47.785980 sshd[1896]: Connection closed by authenticating user root 60.164.133.37 port 34180 [preauth]
May 10 00:20:47.788018 systemd[1]: sshd@29-91.107.204.139:22-60.164.133.37:34180.service: Deactivated successfully.
May 10 00:20:48.008830 systemd[1]: Started sshd@30-91.107.204.139:22-60.164.133.37:35488.service - OpenSSH per-connection server daemon (60.164.133.37:35488).
May 10 00:20:48.312613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 10 00:20:48.323701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:20:48.426168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:20:48.436986 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:20:48.483589 systemd[1]: Started sshd@31-91.107.204.139:22-147.75.109.163:38446.service - OpenSSH per-connection server daemon (147.75.109.163:38446).
May 10 00:20:48.494419 kubelet[1923]: E0510 00:20:48.494380 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:20:48.497041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:20:48.497194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:20:48.959434 sshd[1912]: Connection closed by authenticating user root 60.164.133.37 port 35488 [preauth]
May 10 00:20:48.960668 systemd[1]: sshd@30-91.107.204.139:22-60.164.133.37:35488.service: Deactivated successfully.
May 10 00:20:49.162665 systemd[1]: Started sshd@32-91.107.204.139:22-60.164.133.37:36880.service - OpenSSH per-connection server daemon (60.164.133.37:36880).
May 10 00:20:49.487164 sshd[1932]: Accepted publickey for core from 147.75.109.163 port 38446 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:49.489321 sshd[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:49.495606 systemd-logind[1458]: New session 2 of user core.
May 10 00:20:49.500662 systemd[1]: Started session-2.scope - Session 2 of User core.
May 10 00:20:50.110502 sshd[1938]: Connection closed by authenticating user root 60.164.133.37 port 36880 [preauth]
May 10 00:20:50.113959 systemd[1]: sshd@32-91.107.204.139:22-60.164.133.37:36880.service: Deactivated successfully.
May 10 00:20:50.183074 sshd[1932]: pam_unix(sshd:session): session closed for user core
May 10 00:20:50.188574 systemd[1]: sshd@31-91.107.204.139:22-147.75.109.163:38446.service: Deactivated successfully.
May 10 00:20:50.190284 systemd[1]: session-2.scope: Deactivated successfully.
May 10 00:20:50.192772 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
May 10 00:20:50.194025 systemd-logind[1458]: Removed session 2.
May 10 00:20:50.316802 systemd[1]: Started sshd@33-91.107.204.139:22-60.164.133.37:38270.service - OpenSSH per-connection server daemon (60.164.133.37:38270).
May 10 00:20:50.367742 systemd[1]: Started sshd@34-91.107.204.139:22-147.75.109.163:38460.service - OpenSSH per-connection server daemon (147.75.109.163:38460).
May 10 00:20:51.267486 sshd[1947]: Connection closed by authenticating user root 60.164.133.37 port 38270 [preauth]
May 10 00:20:51.266748 systemd[1]: sshd@33-91.107.204.139:22-60.164.133.37:38270.service: Deactivated successfully.
May 10 00:20:51.380898 sshd[1950]: Accepted publickey for core from 147.75.109.163 port 38460 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:51.384105 sshd[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:51.390231 systemd-logind[1458]: New session 3 of user core.
May 10 00:20:51.404616 systemd[1]: Started session-3.scope - Session 3 of User core.
May 10 00:20:51.465614 systemd[1]: Started sshd@35-91.107.204.139:22-60.164.133.37:39616.service - OpenSSH per-connection server daemon (60.164.133.37:39616).
May 10 00:20:52.081748 sshd[1950]: pam_unix(sshd:session): session closed for user core
May 10 00:20:52.087170 systemd[1]: sshd@34-91.107.204.139:22-147.75.109.163:38460.service: Deactivated successfully.
May 10 00:20:52.089808 systemd[1]: session-3.scope: Deactivated successfully.
May 10 00:20:52.091163 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit.
May 10 00:20:52.093707 systemd-logind[1458]: Removed session 3.
May 10 00:20:52.261610 systemd[1]: Started sshd@36-91.107.204.139:22-147.75.109.163:38476.service - OpenSSH per-connection server daemon (147.75.109.163:38476).
May 10 00:20:52.409492 sshd[1956]: Connection closed by authenticating user root 60.164.133.37 port 39616 [preauth]
May 10 00:20:52.410949 systemd[1]: sshd@35-91.107.204.139:22-60.164.133.37:39616.service: Deactivated successfully.
May 10 00:20:52.618677 systemd[1]: Started sshd@37-91.107.204.139:22-60.164.133.37:40926.service - OpenSSH per-connection server daemon (60.164.133.37:40926).
May 10 00:20:53.287148 sshd[1962]: Accepted publickey for core from 147.75.109.163 port 38476 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:53.290209 sshd[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:53.297503 systemd-logind[1458]: New session 4 of user core.
May 10 00:20:53.310850 systemd[1]: Started session-4.scope - Session 4 of User core.
May 10 00:20:53.583208 sshd[1967]: Connection closed by authenticating user root 60.164.133.37 port 40926 [preauth]
May 10 00:20:53.586802 systemd[1]: sshd@37-91.107.204.139:22-60.164.133.37:40926.service: Deactivated successfully.
May 10 00:20:53.783648 systemd[1]: Started sshd@38-91.107.204.139:22-60.164.133.37:42264.service - OpenSSH per-connection server daemon (60.164.133.37:42264).
May 10 00:20:53.992748 sshd[1962]: pam_unix(sshd:session): session closed for user core
May 10 00:20:53.997250 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
May 10 00:20:53.999068 systemd[1]: sshd@36-91.107.204.139:22-147.75.109.163:38476.service: Deactivated successfully.
May 10 00:20:54.001263 systemd[1]: session-4.scope: Deactivated successfully.
May 10 00:20:54.002910 systemd-logind[1458]: Removed session 4.
May 10 00:20:54.169413 systemd[1]: Started sshd@39-91.107.204.139:22-147.75.109.163:38486.service - OpenSSH per-connection server daemon (147.75.109.163:38486).
May 10 00:20:54.723956 sshd[1973]: Connection closed by authenticating user root 60.164.133.37 port 42264 [preauth]
May 10 00:20:54.728091 systemd[1]: sshd@38-91.107.204.139:22-60.164.133.37:42264.service: Deactivated successfully.
May 10 00:20:54.927757 systemd[1]: Started sshd@40-91.107.204.139:22-60.164.133.37:43456.service - OpenSSH per-connection server daemon (60.164.133.37:43456).
May 10 00:20:55.164734 sshd[1979]: Accepted publickey for core from 147.75.109.163 port 38486 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:55.166837 sshd[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:55.171991 systemd-logind[1458]: New session 5 of user core.
May 10 00:20:55.180609 systemd[1]: Started session-5.scope - Session 5 of User core.
May 10 00:20:55.704973 sudo[1987]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 10 00:20:55.705277 sudo[1987]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:20:55.727505 sudo[1987]: pam_unix(sudo:session): session closed for user root
May 10 00:20:55.865096 sshd[1984]: Connection closed by authenticating user root 60.164.133.37 port 43456 [preauth]
May 10 00:20:55.869751 systemd[1]: sshd@40-91.107.204.139:22-60.164.133.37:43456.service: Deactivated successfully.
May 10 00:20:55.890810 sshd[1979]: pam_unix(sshd:session): session closed for user core
May 10 00:20:55.895944 systemd[1]: sshd@39-91.107.204.139:22-147.75.109.163:38486.service: Deactivated successfully.
May 10 00:20:55.897563 systemd[1]: session-5.scope: Deactivated successfully.
May 10 00:20:55.900093 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
May 10 00:20:55.902125 systemd-logind[1458]: Removed session 5.
May 10 00:20:56.084170 systemd[1]: Started sshd@41-91.107.204.139:22-60.164.133.37:44864.service - OpenSSH per-connection server daemon (60.164.133.37:44864).
May 10 00:20:56.087670 systemd[1]: Started sshd@42-91.107.204.139:22-147.75.109.163:38490.service - OpenSSH per-connection server daemon (147.75.109.163:38490).
May 10 00:20:57.040470 sshd[1995]: Connection closed by authenticating user root 60.164.133.37 port 44864 [preauth]
May 10 00:20:57.045275 systemd[1]: sshd@41-91.107.204.139:22-60.164.133.37:44864.service: Deactivated successfully.
May 10 00:20:57.106216 sshd[1996]: Accepted publickey for core from 147.75.109.163 port 38490 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:57.108606 sshd[1996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:57.114662 systemd-logind[1458]: New session 6 of user core.
May 10 00:20:57.119501 systemd[1]: Started session-6.scope - Session 6 of User core.
May 10 00:20:57.239746 systemd[1]: Started sshd@43-91.107.204.139:22-60.164.133.37:46316.service - OpenSSH per-connection server daemon (60.164.133.37:46316).
May 10 00:20:57.647905 sudo[2006]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 10 00:20:57.648538 sudo[2006]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:20:57.653488 sudo[2006]: pam_unix(sudo:session): session closed for user root
May 10 00:20:57.660594 sudo[2005]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 10 00:20:57.660947 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:20:57.680749 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 10 00:20:57.683718 auditctl[2009]: No rules
May 10 00:20:57.684081 systemd[1]: audit-rules.service: Deactivated successfully.
May 10 00:20:57.684269 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 10 00:20:57.687903 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 10 00:20:57.729527 augenrules[2027]: No rules
May 10 00:20:57.731484 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 10 00:20:57.732783 sudo[2005]: pam_unix(sudo:session): session closed for user root
May 10 00:20:57.898877 sshd[1996]: pam_unix(sshd:session): session closed for user core
May 10 00:20:57.904481 systemd[1]: sshd@42-91.107.204.139:22-147.75.109.163:38490.service: Deactivated successfully.
May 10 00:20:57.907168 systemd[1]: session-6.scope: Deactivated successfully.
May 10 00:20:57.908434 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
May 10 00:20:57.910640 systemd-logind[1458]: Removed session 6.
May 10 00:20:58.070534 systemd[1]: Started sshd@44-91.107.204.139:22-147.75.109.163:56110.service - OpenSSH per-connection server daemon (147.75.109.163:56110).
May 10 00:20:58.182469 sshd[2003]: Connection closed by authenticating user root 60.164.133.37 port 46316 [preauth]
May 10 00:20:58.185603 systemd[1]: sshd@43-91.107.204.139:22-60.164.133.37:46316.service: Deactivated successfully.
May 10 00:20:58.390747 systemd[1]: Started sshd@45-91.107.204.139:22-60.164.133.37:47616.service - OpenSSH per-connection server daemon (60.164.133.37:47616).
May 10 00:20:58.562685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 10 00:20:58.569855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:20:58.707557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:20:58.710854 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:20:58.754596 kubelet[2050]: E0510 00:20:58.754515 2050 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:20:58.757530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:20:58.757784 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:20:59.068209 sshd[2035]: Accepted publickey for core from 147.75.109.163 port 56110 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:20:59.070830 sshd[2035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:20:59.076277 systemd-logind[1458]: New session 7 of user core.
May 10 00:20:59.087979 systemd[1]: Started session-7.scope - Session 7 of User core.
May 10 00:20:59.344642 sshd[2040]: Connection closed by authenticating user root 60.164.133.37 port 47616 [preauth]
May 10 00:20:59.346789 systemd[1]: sshd@45-91.107.204.139:22-60.164.133.37:47616.service: Deactivated successfully.
May 10 00:20:59.544134 systemd[1]: Started sshd@46-91.107.204.139:22-60.164.133.37:48934.service - OpenSSH per-connection server daemon (60.164.133.37:48934).
May 10 00:20:59.599784 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 10 00:20:59.600904 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 10 00:20:59.890755 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 10 00:20:59.891522 (dockerd)[2080]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 10 00:21:00.128860 dockerd[2080]: time="2025-05-10T00:21:00.128790731Z" level=info msg="Starting up"
May 10 00:21:00.225084 dockerd[2080]: time="2025-05-10T00:21:00.224695814Z" level=info msg="Loading containers: start."
May 10 00:21:00.328324 kernel: Initializing XFRM netlink socket
May 10 00:21:00.415916 systemd-networkd[1367]: docker0: Link UP
May 10 00:21:00.433391 dockerd[2080]: time="2025-05-10T00:21:00.432597900Z" level=info msg="Loading containers: done."
May 10 00:21:00.449169 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1573991964-merged.mount: Deactivated successfully.
May 10 00:21:00.455753 dockerd[2080]: time="2025-05-10T00:21:00.455578021Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 10 00:21:00.455753 dockerd[2080]: time="2025-05-10T00:21:00.455717741Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 10 00:21:00.456009 dockerd[2080]: time="2025-05-10T00:21:00.455860661Z" level=info msg="Daemon has completed initialization"
May 10 00:21:00.484421 sshd[2062]: Connection closed by authenticating user root 60.164.133.37 port 48934 [preauth]
May 10 00:21:00.488179 systemd[1]: sshd@46-91.107.204.139:22-60.164.133.37:48934.service: Deactivated successfully.
May 10 00:21:00.497077 dockerd[2080]: time="2025-05-10T00:21:00.496190502Z" level=info msg="API listen on /run/docker.sock"
May 10 00:21:00.496429 systemd[1]: Started docker.service - Docker Application Container Engine.
May 10 00:21:00.685631 systemd[1]: Started sshd@47-91.107.204.139:22-60.164.133.37:50150.service - OpenSSH per-connection server daemon (60.164.133.37:50150).
May 10 00:21:01.600868 containerd[1476]: time="2025-05-10T00:21:01.600799053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 10 00:21:01.614362 sshd[2220]: Connection closed by authenticating user root 60.164.133.37 port 50150 [preauth]
May 10 00:21:01.617576 systemd[1]: sshd@47-91.107.204.139:22-60.164.133.37:50150.service: Deactivated successfully.
May 10 00:21:01.834757 systemd[1]: Started sshd@48-91.107.204.139:22-60.164.133.37:51426.service - OpenSSH per-connection server daemon (60.164.133.37:51426).
May 10 00:21:02.275690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144011092.mount: Deactivated successfully.
May 10 00:21:02.785967 sshd[2232]: Connection closed by authenticating user root 60.164.133.37 port 51426 [preauth]
May 10 00:21:02.789813 systemd[1]: sshd@48-91.107.204.139:22-60.164.133.37:51426.service: Deactivated successfully.
May 10 00:21:02.992170 systemd[1]: Started sshd@49-91.107.204.139:22-60.164.133.37:52922.service - OpenSSH per-connection server daemon (60.164.133.37:52922).
May 10 00:21:03.957145 sshd[2287]: Connection closed by authenticating user root 60.164.133.37 port 52922 [preauth]
May 10 00:21:03.962163 systemd[1]: sshd@49-91.107.204.139:22-60.164.133.37:52922.service: Deactivated successfully.
May 10 00:21:04.157867 systemd[1]: Started sshd@50-91.107.204.139:22-60.164.133.37:54446.service - OpenSSH per-connection server daemon (60.164.133.37:54446).
May 10 00:21:05.088028 sshd[2292]: Connection closed by authenticating user root 60.164.133.37 port 54446 [preauth]
May 10 00:21:05.093773 systemd[1]: sshd@50-91.107.204.139:22-60.164.133.37:54446.service: Deactivated successfully.
May 10 00:21:05.267794 containerd[1476]: time="2025-05-10T00:21:05.267680822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:05.269705 containerd[1476]: time="2025-05-10T00:21:05.269657422Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794242"
May 10 00:21:05.271323 containerd[1476]: time="2025-05-10T00:21:05.270479022Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:05.273649 containerd[1476]: time="2025-05-10T00:21:05.273593542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:05.275124 containerd[1476]: time="2025-05-10T00:21:05.274855862Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 3.673992329s"
May 10 00:21:05.275124 containerd[1476]: time="2025-05-10T00:21:05.274899062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 10 00:21:05.296622 systemd[1]: Started sshd@51-91.107.204.139:22-60.164.133.37:55726.service - OpenSSH per-connection server daemon (60.164.133.37:55726).
May 10 00:21:05.303000 containerd[1476]: time="2025-05-10T00:21:05.302962982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 10 00:21:06.253810 sshd[2305]: Connection closed by authenticating user root 60.164.133.37 port 55726 [preauth]
May 10 00:21:06.263601 systemd[1]: sshd@51-91.107.204.139:22-60.164.133.37:55726.service: Deactivated successfully.
May 10 00:21:06.452792 systemd[1]: Started sshd@52-91.107.204.139:22-60.164.133.37:56864.service - OpenSSH per-connection server daemon (60.164.133.37:56864).
May 10 00:21:07.403216 sshd[2310]: Connection closed by authenticating user root 60.164.133.37 port 56864 [preauth]
May 10 00:21:07.406679 systemd[1]: sshd@52-91.107.204.139:22-60.164.133.37:56864.service: Deactivated successfully.
May 10 00:21:07.609255 systemd[1]: Started sshd@53-91.107.204.139:22-60.164.133.37:58376.service - OpenSSH per-connection server daemon (60.164.133.37:58376).
May 10 00:21:07.734406 containerd[1476]: time="2025-05-10T00:21:07.733224351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:07.736025 containerd[1476]: time="2025-05-10T00:21:07.735987031Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855570"
May 10 00:21:07.737487 containerd[1476]: time="2025-05-10T00:21:07.737443711Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:07.741021 containerd[1476]: time="2025-05-10T00:21:07.740983111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:07.743185 containerd[1476]: time="2025-05-10T00:21:07.743139631Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.440134689s"
May 10 00:21:07.746311 containerd[1476]: time="2025-05-10T00:21:07.744409311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 10 00:21:07.771776 containerd[1476]: time="2025-05-10T00:21:07.771707351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 10 00:21:08.569403 sshd[2319]: Connection closed by authenticating user root 60.164.133.37 port 58376 [preauth]
May 10 00:21:08.571582 systemd[1]: sshd@53-91.107.204.139:22-60.164.133.37:58376.service: Deactivated successfully.
May 10 00:21:08.769543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 10 00:21:08.775550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:08.778050 systemd[1]: Started sshd@54-91.107.204.139:22-60.164.133.37:59964.service - OpenSSH per-connection server daemon (60.164.133.37:59964).
May 10 00:21:08.898064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:08.903356 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:21:08.951517 kubelet[2338]: E0510 00:21:08.951459 2338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:21:08.954428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:21:08.954609 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:21:09.436046 containerd[1476]: time="2025-05-10T00:21:09.435991700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:09.437351 containerd[1476]: time="2025-05-10T00:21:09.437309140Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263965"
May 10 00:21:09.438321 containerd[1476]: time="2025-05-10T00:21:09.438208140Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:09.441853 containerd[1476]: time="2025-05-10T00:21:09.441797300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:09.443419 containerd[1476]: time="2025-05-10T00:21:09.443177620Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.671403869s"
May 10 00:21:09.443419 containerd[1476]: time="2025-05-10T00:21:09.443220940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 10 00:21:09.466513 containerd[1476]: time="2025-05-10T00:21:09.466447060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 10 00:21:09.723573 sshd[2329]: Connection closed by authenticating user root 60.164.133.37 port 59964 [preauth]
May 10 00:21:09.728205 systemd[1]: sshd@54-91.107.204.139:22-60.164.133.37:59964.service: Deactivated successfully.
May 10 00:21:09.927405 systemd[1]: Started sshd@55-91.107.204.139:22-60.164.133.37:32962.service - OpenSSH per-connection server daemon (60.164.133.37:32962).
May 10 00:21:10.388957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282775798.mount: Deactivated successfully.
May 10 00:21:10.875774 sshd[2357]: Connection closed by authenticating user root 60.164.133.37 port 32962 [preauth]
May 10 00:21:10.877613 systemd[1]: sshd@55-91.107.204.139:22-60.164.133.37:32962.service: Deactivated successfully.
May 10 00:21:11.082736 systemd[1]: Started sshd@56-91.107.204.139:22-60.164.133.37:34176.service - OpenSSH per-connection server daemon (60.164.133.37:34176).
May 10 00:21:11.453873 containerd[1476]: time="2025-05-10T00:21:11.453769530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:11.455315 containerd[1476]: time="2025-05-10T00:21:11.455098973Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775731"
May 10 00:21:11.456378 containerd[1476]: time="2025-05-10T00:21:11.456321495Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:11.459011 containerd[1476]: time="2025-05-10T00:21:11.458943181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:11.460629 containerd[1476]: time="2025-05-10T00:21:11.460159863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.993646243s"
May 10 00:21:11.460629 containerd[1476]: time="2025-05-10T00:21:11.460202143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 10 00:21:11.487969 containerd[1476]: time="2025-05-10T00:21:11.487912840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 10 00:21:12.018651 sshd[2366]: Connection closed by authenticating user root 60.164.133.37 port 34176 [preauth]
May 10 00:21:12.023112 systemd[1]: sshd@56-91.107.204.139:22-60.164.133.37:34176.service: Deactivated successfully.
May 10 00:21:12.171103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231094984.mount: Deactivated successfully.
May 10 00:21:12.219090 systemd[1]: Started sshd@57-91.107.204.139:22-60.164.133.37:35486.service - OpenSSH per-connection server daemon (60.164.133.37:35486).
May 10 00:21:13.161484 sshd[2387]: Connection closed by authenticating user root 60.164.133.37 port 35486 [preauth]
May 10 00:21:13.166584 systemd[1]: sshd@57-91.107.204.139:22-60.164.133.37:35486.service: Deactivated successfully.
May 10 00:21:13.473417 systemd[1]: Started sshd@58-91.107.204.139:22-60.164.133.37:36898.service - OpenSSH per-connection server daemon (60.164.133.37:36898).
May 10 00:21:13.496176 containerd[1476]: time="2025-05-10T00:21:13.496068324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:13.497902 containerd[1476]: time="2025-05-10T00:21:13.497854716Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
May 10 00:21:13.498883 containerd[1476]: time="2025-05-10T00:21:13.498790413Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:13.501971 containerd[1476]: time="2025-05-10T00:21:13.501888108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:13.503718 containerd[1476]: time="2025-05-10T00:21:13.503471256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.015504696s"
May 10 00:21:13.503718 containerd[1476]: time="2025-05-10T00:21:13.503515897Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 10 00:21:13.525119 containerd[1476]: time="2025-05-10T00:21:13.525080320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 10 00:21:14.026825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2734553721.mount: Deactivated successfully.
May 10 00:21:14.035320 containerd[1476]: time="2025-05-10T00:21:14.033903867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:14.035320 containerd[1476]: time="2025-05-10T00:21:14.035190409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
May 10 00:21:14.036258 containerd[1476]: time="2025-05-10T00:21:14.035522935Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:14.038358 containerd[1476]: time="2025-05-10T00:21:14.038302503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:14.039672 containerd[1476]: time="2025-05-10T00:21:14.039065556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 513.942555ms"
May 10 00:21:14.039672 containerd[1476]: time="2025-05-10T00:21:14.039102037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 10 00:21:14.060066 containerd[1476]: time="2025-05-10T00:21:14.060028558Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 10 00:21:14.670082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517612840.mount: Deactivated successfully.
May 10 00:21:14.691377 sshd[2428]: Connection closed by authenticating user root 60.164.133.37 port 36898 [preauth]
May 10 00:21:14.693211 systemd[1]: sshd@58-91.107.204.139:22-60.164.133.37:36898.service: Deactivated successfully.
May 10 00:21:14.841951 systemd[1]: Started sshd@59-91.107.204.139:22-60.164.133.37:38756.service - OpenSSH per-connection server daemon (60.164.133.37:38756).
May 10 00:21:15.799418 sshd[2457]: Connection closed by authenticating user root 60.164.133.37 port 38756 [preauth]
May 10 00:21:15.802716 systemd[1]: sshd@59-91.107.204.139:22-60.164.133.37:38756.service: Deactivated successfully.
May 10 00:21:16.002725 systemd[1]: Started sshd@60-91.107.204.139:22-60.164.133.37:40168.service - OpenSSH per-connection server daemon (60.164.133.37:40168).
May 10 00:21:16.948202 sshd[2490]: Connection closed by authenticating user root 60.164.133.37 port 40168 [preauth]
May 10 00:21:16.950591 systemd[1]: sshd@60-91.107.204.139:22-60.164.133.37:40168.service: Deactivated successfully.
May 10 00:21:17.148246 systemd[1]: Started sshd@61-91.107.204.139:22-60.164.133.37:41632.service - OpenSSH per-connection server daemon (60.164.133.37:41632).
May 10 00:21:18.102191 sshd[2495]: Connection closed by authenticating user root 60.164.133.37 port 41632 [preauth]
May 10 00:21:18.105965 systemd[1]: sshd@61-91.107.204.139:22-60.164.133.37:41632.service: Deactivated successfully.
May 10 00:21:18.310415 systemd[1]: Started sshd@62-91.107.204.139:22-60.164.133.37:42896.service - OpenSSH per-connection server daemon (60.164.133.37:42896).
May 10 00:21:19.023510 containerd[1476]: time="2025-05-10T00:21:19.022636817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:19.024732 containerd[1476]: time="2025-05-10T00:21:19.024683768Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
May 10 00:21:19.025592 containerd[1476]: time="2025-05-10T00:21:19.025499740Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:19.032214 containerd[1476]: time="2025-05-10T00:21:19.031415829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:19.033054 containerd[1476]: time="2025-05-10T00:21:19.033009573Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.972938414s"
May 10 00:21:19.033054 containerd[1476]: time="2025-05-10T00:21:19.033047814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 10 00:21:19.062836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
May 10 00:21:19.072993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:19.179816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:19.184548 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 10 00:21:19.233060 kubelet[2524]: E0510 00:21:19.233001 2524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 10 00:21:19.236994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 10 00:21:19.237447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 10 00:21:19.271622 sshd[2500]: Connection closed by authenticating user root 60.164.133.37 port 42896 [preauth]
May 10 00:21:19.274451 systemd[1]: sshd@62-91.107.204.139:22-60.164.133.37:42896.service: Deactivated successfully.
May 10 00:21:19.475768 systemd[1]: Started sshd@63-91.107.204.139:22-60.164.133.37:44308.service - OpenSSH per-connection server daemon (60.164.133.37:44308).
May 10 00:21:20.429349 sshd[2534]: Connection closed by authenticating user root 60.164.133.37 port 44308 [preauth]
May 10 00:21:20.432662 systemd[1]: sshd@63-91.107.204.139:22-60.164.133.37:44308.service: Deactivated successfully.
May 10 00:21:20.634586 systemd[1]: Started sshd@64-91.107.204.139:22-60.164.133.37:45870.service - OpenSSH per-connection server daemon (60.164.133.37:45870).
May 10 00:21:21.583217 sshd[2589]: Connection closed by authenticating user root 60.164.133.37 port 45870 [preauth]
May 10 00:21:21.586228 systemd[1]: sshd@64-91.107.204.139:22-60.164.133.37:45870.service: Deactivated successfully.
May 10 00:21:21.780607 systemd[1]: Started sshd@65-91.107.204.139:22-60.164.133.37:47228.service - OpenSSH per-connection server daemon (60.164.133.37:47228).
May 10 00:21:22.710326 sshd[2594]: Connection closed by authenticating user root 60.164.133.37 port 47228 [preauth]
May 10 00:21:22.713073 systemd[1]: sshd@65-91.107.204.139:22-60.164.133.37:47228.service: Deactivated successfully.
May 10 00:21:22.910585 systemd[1]: Started sshd@66-91.107.204.139:22-60.164.133.37:48506.service - OpenSSH per-connection server daemon (60.164.133.37:48506).
May 10 00:21:23.845728 sshd[2599]: Connection closed by authenticating user root 60.164.133.37 port 48506 [preauth]
May 10 00:21:23.847810 systemd[1]: sshd@66-91.107.204.139:22-60.164.133.37:48506.service: Deactivated successfully.
May 10 00:21:24.045695 systemd[1]: Started sshd@67-91.107.204.139:22-60.164.133.37:49794.service - OpenSSH per-connection server daemon (60.164.133.37:49794).
May 10 00:21:24.981505 sshd[2604]: Connection closed by authenticating user root 60.164.133.37 port 49794 [preauth]
May 10 00:21:24.986730 systemd[1]: sshd@67-91.107.204.139:22-60.164.133.37:49794.service: Deactivated successfully.
May 10 00:21:25.193742 systemd[1]: Started sshd@68-91.107.204.139:22-60.164.133.37:51100.service - OpenSSH per-connection server daemon (60.164.133.37:51100).
May 10 00:21:26.056261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:26.068792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:26.095127 systemd[1]: Reloading requested from client PID 2617 ('systemctl') (unit session-7.scope)...
May 10 00:21:26.095143 systemd[1]: Reloading...
May 10 00:21:26.139116 sshd[2609]: Connection closed by authenticating user root 60.164.133.37 port 51100 [preauth]
May 10 00:21:26.206317 zram_generator::config[2659]: No configuration found.
May 10 00:21:26.328014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:21:26.397927 systemd[1]: Reloading finished in 302 ms.
May 10 00:21:26.434565 systemd[1]: sshd@68-91.107.204.139:22-60.164.133.37:51100.service: Deactivated successfully.
May 10 00:21:26.459946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:26.464676 systemd[1]: Started sshd@69-91.107.204.139:22-60.164.133.37:52744.service - OpenSSH per-connection server daemon (60.164.133.37:52744).
May 10 00:21:26.466555 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 00:21:26.467253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:26.468725 systemd[1]: kubelet.service: Deactivated successfully.
May 10 00:21:26.468895 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:26.479762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:26.594273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:26.609914 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 00:21:26.655072 kubelet[2715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:21:26.655072 kubelet[2715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:21:26.655072 kubelet[2715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:21:26.655487 kubelet[2715]: I0510 00:21:26.655098 2715 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:21:27.202656 kubelet[2715]: I0510 00:21:27.202582 2715 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 10 00:21:27.202656 kubelet[2715]: I0510 00:21:27.202619 2715 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:21:27.204329 kubelet[2715]: I0510 00:21:27.202922 2715 server.go:927] "Client rotation is on, will bootstrap in background"
May 10 00:21:27.228867 kubelet[2715]: E0510 00:21:27.228801 2715 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://91.107.204.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.229486 kubelet[2715]: I0510 00:21:27.229321 2715 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:21:27.239347 kubelet[2715]: I0510 00:21:27.239322 2715 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 00:21:27.241009 kubelet[2715]: I0510 00:21:27.240961 2715 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 00:21:27.241946 kubelet[2715]: I0510 00:21:27.241127 2715 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-2389c948d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 10 00:21:27.241946 kubelet[2715]: I0510 00:21:27.241435 2715 topology_manager.go:138] "Creating topology manager with none policy"
May 10 00:21:27.241946 kubelet[2715]: I0510 00:21:27.241446 2715 container_manager_linux.go:301] "Creating device plugin manager"
May 10 00:21:27.241946 kubelet[2715]: I0510 00:21:27.241696 2715 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:21:27.242992 kubelet[2715]: I0510 00:21:27.242973 2715 kubelet.go:400] "Attempting to sync node with API server"
May 10 00:21:27.243076 kubelet[2715]: I0510 00:21:27.243067 2715 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 00:21:27.243304 kubelet[2715]: I0510 00:21:27.243280 2715 kubelet.go:312] "Adding apiserver pod source"
May 10 00:21:27.243374 kubelet[2715]: I0510 00:21:27.243365 2715 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 00:21:27.244602 kubelet[2715]: W0510 00:21:27.244560 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.204.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.244711 kubelet[2715]: E0510 00:21:27.244700 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.107.204.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.245757 kubelet[2715]: W0510 00:21:27.245651 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.204.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2389c948d4&limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.245757 kubelet[2715]: E0510 00:21:27.245721 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.107.204.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2389c948d4&limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.246031 kubelet[2715]: I0510 00:21:27.246015 2715 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 10 00:21:27.246476 kubelet[2715]: I0510 00:21:27.246457 2715 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 00:21:27.246655 kubelet[2715]: W0510 00:21:27.246644 2715 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 10 00:21:27.249280 kubelet[2715]: I0510 00:21:27.249249 2715 server.go:1264] "Started kubelet"
May 10 00:21:27.253268 kubelet[2715]: I0510 00:21:27.253227 2715 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:21:27.260408 kubelet[2715]: I0510 00:21:27.260366 2715 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 10 00:21:27.261619 kubelet[2715]: I0510 00:21:27.261586 2715 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 10 00:21:27.262687 kubelet[2715]: I0510 00:21:27.262656 2715 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:21:27.263170 kubelet[2715]: I0510 00:21:27.262961 2715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 00:21:27.263170 kubelet[2715]: I0510 00:21:27.262991 2715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 00:21:27.263245 kubelet[2715]: I0510 00:21:27.263207 2715 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 00:21:27.263953 kubelet[2715]: W0510 00:21:27.263670 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.204.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.263953 kubelet[2715]: E0510 00:21:27.263715 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.107.204.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.263953 kubelet[2715]: E0510 00:21:27.263767 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.204.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2389c948d4?timeout=10s\": dial tcp 91.107.204.139:6443: connect: connection refused" interval="200ms"
May 10 00:21:27.263953 kubelet[2715]: E0510 00:21:27.263820 2715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.107.204.139:6443/api/v1/namespaces/default/events\": dial tcp 91.107.204.139:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-2389c948d4.183e0287af2e603e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-2389c948d4,UID:ci-4081-3-3-n-2389c948d4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-2389c948d4,},FirstTimestamp:2025-05-10 00:21:27.249223742 +0000 UTC m=+0.635348031,LastTimestamp:2025-05-10 00:21:27.249223742 +0000 UTC m=+0.635348031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-2389c948d4,}"
May 10 00:21:27.265463 kubelet[2715]: I0510 00:21:27.265438 2715 factory.go:221] Registration of the systemd container factory successfully
May 10 00:21:27.265555 kubelet[2715]: I0510 00:21:27.265533 2715 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 00:21:27.267688 kubelet[2715]: I0510 00:21:27.266483 2715 server.go:455] "Adding debug handlers to kubelet server"
May 10 00:21:27.268878 kubelet[2715]: I0510 00:21:27.268845 2715 factory.go:221] Registration of the containerd container factory successfully
May 10 00:21:27.276634 kubelet[2715]: I0510 00:21:27.276600 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 00:21:27.278045 kubelet[2715]: I0510 00:21:27.278023 2715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 00:21:27.278265 kubelet[2715]: I0510 00:21:27.278255 2715 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 00:21:27.278360 kubelet[2715]: I0510 00:21:27.278350 2715 kubelet.go:2337] "Starting kubelet main sync loop"
May 10 00:21:27.278511 kubelet[2715]: E0510 00:21:27.278484 2715 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 00:21:27.285466 kubelet[2715]: W0510 00:21:27.285358 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.204.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.285683 kubelet[2715]: E0510 00:21:27.285656 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.107.204.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:27.290422 kubelet[2715]: E0510 00:21:27.290398 2715 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 00:21:27.296939 kubelet[2715]: I0510 00:21:27.296904 2715 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 00:21:27.297345 kubelet[2715]: I0510 00:21:27.297280 2715 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 00:21:27.297563 kubelet[2715]: I0510 00:21:27.297541 2715 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:21:27.301124 kubelet[2715]: I0510 00:21:27.301104 2715 policy_none.go:49] "None policy: Start"
May 10 00:21:27.302834 kubelet[2715]: I0510 00:21:27.302452 2715 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 00:21:27.302834 kubelet[2715]: I0510 00:21:27.302480 2715 state_mem.go:35] "Initializing new in-memory state store"
May 10 00:21:27.312748 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 10 00:21:27.332649 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 10 00:21:27.336144 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 10 00:21:27.344402 kubelet[2715]: I0510 00:21:27.344349 2715 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 00:21:27.344779 kubelet[2715]: I0510 00:21:27.344726 2715 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 00:21:27.344913 kubelet[2715]: I0510 00:21:27.344852 2715 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 00:21:27.349551 kubelet[2715]: E0510 00:21:27.349122 2715 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-2389c948d4\" not found"
May 10 00:21:27.363446 kubelet[2715]: I0510 00:21:27.363341 2715 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.364066 kubelet[2715]: E0510 00:21:27.364019 2715 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.204.139:6443/api/v1/nodes\": dial tcp 91.107.204.139:6443: connect: connection refused" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.379519 kubelet[2715]: I0510 00:21:27.379446 2715 topology_manager.go:215] "Topology Admit Handler" podUID="371e9d14f51cb6001b93f65b4772119b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.382584 kubelet[2715]: I0510 00:21:27.382457 2715 topology_manager.go:215] "Topology Admit Handler" podUID="5ceaf1440399a212b1fa4ee4cf7ffa7e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.384805 kubelet[2715]: I0510 00:21:27.384737 2715 topology_manager.go:215] "Topology Admit Handler" podUID="f416adeb7985764aa50aa7aabad9c039" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.393083 systemd[1]: Created slice kubepods-burstable-pod371e9d14f51cb6001b93f65b4772119b.slice - libcontainer container kubepods-burstable-pod371e9d14f51cb6001b93f65b4772119b.slice.
May 10 00:21:27.396890 systemd[1]: Created slice kubepods-burstable-pod5ceaf1440399a212b1fa4ee4cf7ffa7e.slice - libcontainer container kubepods-burstable-pod5ceaf1440399a212b1fa4ee4cf7ffa7e.slice.
May 10 00:21:27.416556 sshd[2704]: Connection closed by authenticating user root 60.164.133.37 port 52744 [preauth]
May 10 00:21:27.419728 systemd[1]: sshd@69-91.107.204.139:22-60.164.133.37:52744.service: Deactivated successfully.
May 10 00:21:27.423772 systemd[1]: Created slice kubepods-burstable-podf416adeb7985764aa50aa7aabad9c039.slice - libcontainer container kubepods-burstable-podf416adeb7985764aa50aa7aabad9c039.slice.
May 10 00:21:27.464472 kubelet[2715]: I0510 00:21:27.463202 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464472 kubelet[2715]: I0510 00:21:27.463494 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464472 kubelet[2715]: I0510 00:21:27.463546 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464472 kubelet[2715]: I0510 00:21:27.463585 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464472 kubelet[2715]: I0510 00:21:27.463620 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464846 kubelet[2715]: I0510 00:21:27.463654 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464846 kubelet[2715]: I0510 00:21:27.463688 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464846 kubelet[2715]: I0510 00:21:27.463735 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464846 kubelet[2715]: I0510 00:21:27.463775 2715 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f416adeb7985764aa50aa7aabad9c039-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-2389c948d4\" (UID: \"f416adeb7985764aa50aa7aabad9c039\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.464846 kubelet[2715]: E0510 00:21:27.464762 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.204.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2389c948d4?timeout=10s\": dial tcp 91.107.204.139:6443: connect: connection refused" interval="400ms"
May 10 00:21:27.566608 kubelet[2715]: I0510 00:21:27.566561 2715 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.567060 kubelet[2715]: E0510 00:21:27.567019 2715 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.204.139:6443/api/v1/nodes\": dial tcp 91.107.204.139:6443: connect: connection refused" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.624952 systemd[1]: Started sshd@70-91.107.204.139:22-60.164.133.37:54314.service - OpenSSH per-connection server daemon (60.164.133.37:54314).
May 10 00:21:27.717993 containerd[1476]: time="2025-05-10T00:21:27.717715942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-2389c948d4,Uid:371e9d14f51cb6001b93f65b4772119b,Namespace:kube-system,Attempt:0,}"
May 10 00:21:27.719096 containerd[1476]: time="2025-05-10T00:21:27.718713794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-2389c948d4,Uid:5ceaf1440399a212b1fa4ee4cf7ffa7e,Namespace:kube-system,Attempt:0,}"
May 10 00:21:27.726971 containerd[1476]: time="2025-05-10T00:21:27.726863573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-2389c948d4,Uid:f416adeb7985764aa50aa7aabad9c039,Namespace:kube-system,Attempt:0,}"
May 10 00:21:27.865504 kubelet[2715]: E0510 00:21:27.865435 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.204.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2389c948d4?timeout=10s\": dial tcp 91.107.204.139:6443: connect: connection refused" interval="800ms"
May 10 00:21:27.972172 kubelet[2715]: I0510 00:21:27.972052 2715 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:27.972800 kubelet[2715]: E0510 00:21:27.972759 2715 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://91.107.204.139:6443/api/v1/nodes\": dial tcp 91.107.204.139:6443: connect: connection refused" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:28.101348 kubelet[2715]: W0510 00:21:28.101263 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.204.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.101541 kubelet[2715]: E0510 00:21:28.101527 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://91.107.204.139:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.168931 kubelet[2715]: W0510 00:21:28.168821 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.204.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.168931 kubelet[2715]: E0510 00:21:28.168911 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://91.107.204.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.208772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625162729.mount: Deactivated successfully.
May 10 00:21:28.216009 containerd[1476]: time="2025-05-10T00:21:28.215813153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 00:21:28.218053 containerd[1476]: time="2025-05-10T00:21:28.217948858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
May 10 00:21:28.218216 containerd[1476]: time="2025-05-10T00:21:28.218110780Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 00:21:28.219841 containerd[1476]: time="2025-05-10T00:21:28.219787680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 10 00:21:28.222081 containerd[1476]: time="2025-05-10T00:21:28.222014866Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 00:21:28.224606 containerd[1476]: time="2025-05-10T00:21:28.223907649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 10 00:21:28.224606 containerd[1476]: time="2025-05-10T00:21:28.224049530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 00:21:28.229322 containerd[1476]: time="2025-05-10T00:21:28.227900096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.065393ms"
May 10 00:21:28.229322 containerd[1476]: time="2025-05-10T00:21:28.228752786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 10 00:21:28.231219 containerd[1476]: time="2025-05-10T00:21:28.231181134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.282658ms"
May 10 00:21:28.233282 containerd[1476]: time="2025-05-10T00:21:28.233243639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.287625ms"
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.352953892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.353012853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.353033293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.352662129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.352724530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.352736170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.353319 containerd[1476]: time="2025-05-10T00:21:28.352806371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.353758 containerd[1476]: time="2025-05-10T00:21:28.353260736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.356721 containerd[1476]: time="2025-05-10T00:21:28.356470694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:28.356721 containerd[1476]: time="2025-05-10T00:21:28.356518335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:28.356721 containerd[1476]: time="2025-05-10T00:21:28.356528615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.356721 containerd[1476]: time="2025-05-10T00:21:28.356603856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:28.375611 systemd[1]: Started cri-containerd-a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1.scope - libcontainer container a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1.
May 10 00:21:28.388654 systemd[1]: Started cri-containerd-4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410.scope - libcontainer container 4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410.
May 10 00:21:28.391251 systemd[1]: Started cri-containerd-8133a9657e73935a12db3280869e4f41e3045b6447d344b3a289edf97ba5a160.scope - libcontainer container 8133a9657e73935a12db3280869e4f41e3045b6447d344b3a289edf97ba5a160.
May 10 00:21:28.429780 containerd[1476]: time="2025-05-10T00:21:28.429561317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-2389c948d4,Uid:f416adeb7985764aa50aa7aabad9c039,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1\""
May 10 00:21:28.439797 containerd[1476]: time="2025-05-10T00:21:28.439716517Z" level=info msg="CreateContainer within sandbox \"a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 10 00:21:28.465783 containerd[1476]: time="2025-05-10T00:21:28.465283019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-2389c948d4,Uid:371e9d14f51cb6001b93f65b4772119b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8133a9657e73935a12db3280869e4f41e3045b6447d344b3a289edf97ba5a160\""
May 10 00:21:28.468984 containerd[1476]: time="2025-05-10T00:21:28.468857261Z" level=info msg="CreateContainer within sandbox \"a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47\""
May 10 00:21:28.469761 containerd[1476]: time="2025-05-10T00:21:28.469731151Z" level=info msg="StartContainer for \"4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47\""
May 10 00:21:28.470746 containerd[1476]: time="2025-05-10T00:21:28.470719203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-2389c948d4,Uid:5ceaf1440399a212b1fa4ee4cf7ffa7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410\""
May 10 00:21:28.471783 containerd[1476]: time="2025-05-10T00:21:28.471743295Z" level=info msg="CreateContainer within sandbox \"8133a9657e73935a12db3280869e4f41e3045b6447d344b3a289edf97ba5a160\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 10 00:21:28.477202 containerd[1476]: time="2025-05-10T00:21:28.475770303Z" level=info msg="CreateContainer within sandbox \"4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 10 00:21:28.484948 kubelet[2715]: W0510 00:21:28.484837 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.204.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2389c948d4&limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.484948 kubelet[2715]: E0510 00:21:28.484906 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://91.107.204.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-2389c948d4&limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.494585 containerd[1476]: time="2025-05-10T00:21:28.494415123Z" level=info msg="CreateContainer within sandbox \"4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a\""
May 10 00:21:28.495079 containerd[1476]: time="2025-05-10T00:21:28.494884688Z" level=info msg="CreateContainer within sandbox \"8133a9657e73935a12db3280869e4f41e3045b6447d344b3a289edf97ba5a160\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ac4502e0c892a64b5a61a55396c5e34c2158d4ca9e6a5d90629e5642bb186c29\""
May 10 00:21:28.495437 containerd[1476]: time="2025-05-10T00:21:28.495372334Z" level=info msg="StartContainer for \"b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a\""
May 10 00:21:28.495568 containerd[1476]: time="2025-05-10T00:21:28.495545736Z" level=info msg="StartContainer for \"ac4502e0c892a64b5a61a55396c5e34c2158d4ca9e6a5d90629e5642bb186c29\""
May 10 00:21:28.502931 kubelet[2715]: W0510 00:21:28.502873 2715 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.204.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.503032 kubelet[2715]: E0510 00:21:28.502938 2715 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://91.107.204.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.204.139:6443: connect: connection refused
May 10 00:21:28.516561 systemd[1]: Started cri-containerd-4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47.scope - libcontainer container 4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47.
May 10 00:21:28.534551 systemd[1]: Started cri-containerd-b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a.scope - libcontainer container b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a.
May 10 00:21:28.551520 systemd[1]: Started cri-containerd-ac4502e0c892a64b5a61a55396c5e34c2158d4ca9e6a5d90629e5642bb186c29.scope - libcontainer container ac4502e0c892a64b5a61a55396c5e34c2158d4ca9e6a5d90629e5642bb186c29.
May 10 00:21:28.577562 sshd[2749]: Connection closed by authenticating user root 60.164.133.37 port 54314 [preauth]
May 10 00:21:28.581927 systemd[1]: sshd@70-91.107.204.139:22-60.164.133.37:54314.service: Deactivated successfully.
May 10 00:21:28.587314 containerd[1476]: time="2025-05-10T00:21:28.586860695Z" level=info msg="StartContainer for \"4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47\" returns successfully"
May 10 00:21:28.617451 containerd[1476]: time="2025-05-10T00:21:28.617366215Z" level=info msg="StartContainer for \"ac4502e0c892a64b5a61a55396c5e34c2158d4ca9e6a5d90629e5642bb186c29\" returns successfully"
May 10 00:21:28.623177 containerd[1476]: time="2025-05-10T00:21:28.623073762Z" level=info msg="StartContainer for \"b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a\" returns successfully"
May 10 00:21:28.666167 kubelet[2715]: E0510 00:21:28.666112 2715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.204.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-2389c948d4?timeout=10s\": dial tcp 91.107.204.139:6443: connect: connection refused" interval="1.6s"
May 10 00:21:28.777385 kubelet[2715]: I0510 00:21:28.775783 2715 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:28.780241 systemd[1]: Started sshd@71-91.107.204.139:22-60.164.133.37:55676.service - OpenSSH per-connection server daemon (60.164.133.37:55676).
May 10 00:21:29.720817 sshd[2996]: Connection closed by authenticating user root 60.164.133.37 port 55676 [preauth]
May 10 00:21:29.721749 systemd[1]: sshd@71-91.107.204.139:22-60.164.133.37:55676.service: Deactivated successfully.
May 10 00:21:29.921608 systemd[1]: Started sshd@72-91.107.204.139:22-60.164.133.37:56888.service - OpenSSH per-connection server daemon (60.164.133.37:56888).
May 10 00:21:30.647442 kubelet[2715]: E0510 00:21:30.647359 2715 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-2389c948d4\" not found" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:30.703004 kubelet[2715]: I0510 00:21:30.702905 2715 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:30.877055 sshd[3002]: Connection closed by authenticating user root 60.164.133.37 port 56888 [preauth]
May 10 00:21:30.880043 systemd[1]: sshd@72-91.107.204.139:22-60.164.133.37:56888.service: Deactivated successfully.
May 10 00:21:31.080213 systemd[1]: Started sshd@73-91.107.204.139:22-60.164.133.37:58318.service - OpenSSH per-connection server daemon (60.164.133.37:58318).
May 10 00:21:31.247027 kubelet[2715]: I0510 00:21:31.246993 2715 apiserver.go:52] "Watching apiserver"
May 10 00:21:31.262394 kubelet[2715]: I0510 00:21:31.262250 2715 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 10 00:21:32.032980 sshd[3007]: Connection closed by authenticating user root 60.164.133.37 port 58318 [preauth]
May 10 00:21:32.034216 systemd[1]: sshd@73-91.107.204.139:22-60.164.133.37:58318.service: Deactivated successfully.
May 10 00:21:32.335174 systemd[1]: Started sshd@74-91.107.204.139:22-60.164.133.37:59850.service - OpenSSH per-connection server daemon (60.164.133.37:59850).
May 10 00:21:32.550803 systemd[1]: Reloading requested from client PID 3017 ('systemctl') (unit session-7.scope)...
May 10 00:21:32.550824 systemd[1]: Reloading...
May 10 00:21:32.666306 zram_generator::config[3057]: No configuration found.
May 10 00:21:32.775003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 00:21:32.867169 systemd[1]: Reloading finished in 316 ms.
May 10 00:21:32.919188 kubelet[2715]: I0510 00:21:32.919080 2715 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:21:32.919936 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:32.944599 systemd[1]: kubelet.service: Deactivated successfully.
May 10 00:21:32.944935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:32.944997 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 114.3M memory peak, 0B memory swap peak.
May 10 00:21:32.951030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 10 00:21:33.065151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 10 00:21:33.075725 (kubelet)[3104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 10 00:21:33.151349 kubelet[3104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:21:33.154936 kubelet[3104]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 10 00:21:33.154936 kubelet[3104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 10 00:21:33.154936 kubelet[3104]: I0510 00:21:33.152848 3104 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 10 00:21:33.158334 kubelet[3104]: I0510 00:21:33.158290 3104 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 10 00:21:33.158541 kubelet[3104]: I0510 00:21:33.158528 3104 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 10 00:21:33.158807 kubelet[3104]: I0510 00:21:33.158789 3104 server.go:927] "Client rotation is on, will bootstrap in background"
May 10 00:21:33.162813 kubelet[3104]: I0510 00:21:33.162784 3104 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 10 00:21:33.169000 kubelet[3104]: I0510 00:21:33.168958 3104 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 10 00:21:33.178995 kubelet[3104]: I0510 00:21:33.178970 3104 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 10 00:21:33.179356 kubelet[3104]: I0510 00:21:33.179325 3104 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 10 00:21:33.179656 kubelet[3104]: I0510 00:21:33.179461 3104 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-2389c948d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 10 00:21:33.179794 kubelet[3104]: I0510 00:21:33.179781 3104 topology_manager.go:138] "Creating topology manager with none policy"
May 10 00:21:33.179855 kubelet[3104]: I0510 00:21:33.179846 3104 container_manager_linux.go:301] "Creating device plugin manager"
May 10 00:21:33.179948 kubelet[3104]: I0510 00:21:33.179936 3104 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:21:33.180160 kubelet[3104]: I0510 00:21:33.180144 3104 kubelet.go:400] "Attempting to sync node with API server"
May 10 00:21:33.180251 kubelet[3104]: I0510 00:21:33.180240 3104 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 10 00:21:33.180356 kubelet[3104]: I0510 00:21:33.180341 3104 kubelet.go:312] "Adding apiserver pod source"
May 10 00:21:33.180581 kubelet[3104]: I0510 00:21:33.180567 3104 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 10 00:21:33.183614 kubelet[3104]: I0510 00:21:33.183581 3104 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 10 00:21:33.185636 kubelet[3104]: I0510 00:21:33.185607 3104 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 10 00:21:33.187326 kubelet[3104]: I0510 00:21:33.186469 3104 server.go:1264] "Started kubelet"
May 10 00:21:33.192475 kubelet[3104]: I0510 00:21:33.192456 3104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:21:33.200177 kubelet[3104]: I0510 00:21:33.200130 3104 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 10 00:21:33.201246 kubelet[3104]: I0510 00:21:33.201213 3104 server.go:455] "Adding debug handlers to kubelet server"
May 10 00:21:33.202670 kubelet[3104]: I0510 00:21:33.202624 3104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 10 00:21:33.202934 kubelet[3104]: I0510 00:21:33.202919 3104 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 10 00:21:33.204903 kubelet[3104]: I0510 00:21:33.204882 3104 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 10 00:21:33.206448 kubelet[3104]: I0510 00:21:33.206416 3104 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 10 00:21:33.206658 kubelet[3104]: I0510 00:21:33.206647 3104 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:21:33.208223 kubelet[3104]: I0510 00:21:33.208189 3104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 00:21:33.209223 kubelet[3104]: I0510 00:21:33.209203 3104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 00:21:33.209353 kubelet[3104]: I0510 00:21:33.209342 3104 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 00:21:33.209718 kubelet[3104]: I0510 00:21:33.209409 3104 kubelet.go:2337] "Starting kubelet main sync loop"
May 10 00:21:33.209718 kubelet[3104]: E0510 00:21:33.209471 3104 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 00:21:33.225488 kubelet[3104]: I0510 00:21:33.225419 3104 factory.go:221] Registration of the systemd container factory successfully
May 10 00:21:33.225614 kubelet[3104]: I0510 00:21:33.225585 3104 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 00:21:33.231523 kubelet[3104]: I0510 00:21:33.231493 3104 factory.go:221] Registration of the containerd container factory successfully
May 10 00:21:33.232999 kubelet[3104]: E0510 00:21:33.232975 3104 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 00:21:33.279850 kubelet[3104]: I0510 00:21:33.279825 3104 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 00:21:33.280357 kubelet[3104]: I0510 00:21:33.280079 3104 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 00:21:33.280357 kubelet[3104]: I0510 00:21:33.280111 3104 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:21:33.280357 kubelet[3104]: I0510 00:21:33.280260 3104 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 10 00:21:33.280357 kubelet[3104]: I0510 00:21:33.280270 3104 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 10 00:21:33.280813 kubelet[3104]: I0510 00:21:33.280289 3104 policy_none.go:49] "None policy: Start"
May 10 00:21:33.281500 kubelet[3104]: I0510 00:21:33.281202 3104 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 00:21:33.281500 kubelet[3104]: I0510 00:21:33.281260 3104 state_mem.go:35] "Initializing new in-memory state store"
May 10 00:21:33.281500 kubelet[3104]: I0510 00:21:33.281413 3104 state_mem.go:75] "Updated machine memory state"
May 10 00:21:33.286794 kubelet[3104]: I0510 00:21:33.286767 3104 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 00:21:33.287124 kubelet[3104]: I0510 00:21:33.287079 3104 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 00:21:33.287409 kubelet[3104]: I0510 00:21:33.287392 3104 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 00:21:33.310126 kubelet[3104]: I0510 00:21:33.310058 3104 topology_manager.go:215] "Topology Admit Handler" podUID="371e9d14f51cb6001b93f65b4772119b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.310335 kubelet[3104]: I0510 00:21:33.310200 3104 topology_manager.go:215] "Topology Admit Handler" podUID="5ceaf1440399a212b1fa4ee4cf7ffa7e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.310335 kubelet[3104]: I0510 00:21:33.310244 3104 topology_manager.go:215] "Topology Admit Handler" podUID="f416adeb7985764aa50aa7aabad9c039" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.312831 kubelet[3104]: I0510 00:21:33.312514 3104 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.322473 kubelet[3104]: I0510 00:21:33.322322 3104 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.322473 kubelet[3104]: I0510 00:21:33.322441 3104 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509101 kubelet[3104]: I0510 00:21:33.508837 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f416adeb7985764aa50aa7aabad9c039-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-2389c948d4\" (UID: \"f416adeb7985764aa50aa7aabad9c039\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509101 kubelet[3104]: I0510 00:21:33.508950 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509101 kubelet[3104]: I0510 00:21:33.508987 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509101 kubelet[3104]: I0510 00:21:33.509020 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509781 kubelet[3104]: I0510 00:21:33.509563 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509781 kubelet[3104]: I0510 00:21:33.509624 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509781 kubelet[3104]: I0510 00:21:33.509660 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/371e9d14f51cb6001b93f65b4772119b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" (UID: \"371e9d14f51cb6001b93f65b4772119b\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509781 kubelet[3104]: I0510 00:21:33.509690 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.509781 kubelet[3104]: I0510 00:21:33.509721 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ceaf1440399a212b1fa4ee4cf7ffa7e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-2389c948d4\" (UID: \"5ceaf1440399a212b1fa4ee4cf7ffa7e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4"
May 10 00:21:33.536176 sshd[3014]: Connection closed by authenticating user root 60.164.133.37 port 59850 [preauth]
May 10 00:21:33.537750 systemd[1]: sshd@74-91.107.204.139:22-60.164.133.37:59850.service: Deactivated successfully.
May 10 00:21:33.684878 systemd[1]: Started sshd@75-91.107.204.139:22-60.164.133.37:33268.service - OpenSSH per-connection server daemon (60.164.133.37:33268).
May 10 00:21:34.181118 kubelet[3104]: I0510 00:21:34.181034 3104 apiserver.go:52] "Watching apiserver"
May 10 00:21:34.207180 kubelet[3104]: I0510 00:21:34.207078 3104 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 10 00:21:34.281723 kubelet[3104]: E0510 00:21:34.279050 3104 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-2389c948d4\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4"
May 10 00:21:34.335093 kubelet[3104]: I0510 00:21:34.334933 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-2389c948d4" podStartSLOduration=1.3349138489999999 podStartE2EDuration="1.334913849s" podCreationTimestamp="2025-05-10 00:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:21:34.304098537 +0000 UTC m=+1.223298132" watchObservedRunningTime="2025-05-10 00:21:34.334913849 +0000 UTC m=+1.254113444"
May 10 00:21:34.356060 kubelet[3104]: I0510 00:21:34.355853 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-2389c948d4" podStartSLOduration=1.355834821 podStartE2EDuration="1.355834821s" podCreationTimestamp="2025-05-10 00:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:21:34.336094941 +0000 UTC m=+1.255294616" watchObservedRunningTime="2025-05-10 00:21:34.355834821 +0000 UTC m=+1.275034416"
May 10 00:21:34.400481 kubelet[3104]: I0510 00:21:34.400311 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-2389c948d4" podStartSLOduration=1.40026827 podStartE2EDuration="1.40026827s" podCreationTimestamp="2025-05-10 00:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:21:34.356833871 +0000 UTC m=+1.276033466" watchObservedRunningTime="2025-05-10 00:21:34.40026827 +0000 UTC m=+1.319467865"
May 10 00:21:34.631420 sshd[3144]: Connection closed by authenticating user root 60.164.133.37 port 33268 [preauth]
May 10 00:21:34.633857 systemd[1]: sshd@75-91.107.204.139:22-60.164.133.37:33268.service: Deactivated successfully.
May 10 00:21:34.846451 systemd[1]: Started sshd@76-91.107.204.139:22-60.164.133.37:34592.service - OpenSSH per-connection server daemon (60.164.133.37:34592).
May 10 00:21:35.804882 sshd[3150]: Connection closed by authenticating user root 60.164.133.37 port 34592 [preauth]
May 10 00:21:35.808815 systemd[1]: sshd@76-91.107.204.139:22-60.164.133.37:34592.service: Deactivated successfully.
May 10 00:21:36.014560 systemd[1]: Started sshd@77-91.107.204.139:22-60.164.133.37:36058.service - OpenSSH per-connection server daemon (60.164.133.37:36058).
May 10 00:21:36.977437 sshd[3167]: Connection closed by authenticating user root 60.164.133.37 port 36058 [preauth]
May 10 00:21:36.980941 systemd[1]: sshd@77-91.107.204.139:22-60.164.133.37:36058.service: Deactivated successfully.
May 10 00:21:37.180867 systemd[1]: Started sshd@78-91.107.204.139:22-60.164.133.37:37316.service - OpenSSH per-connection server daemon (60.164.133.37:37316).
May 10 00:21:38.127228 sshd[3180]: Connection closed by authenticating user root 60.164.133.37 port 37316 [preauth]
May 10 00:21:38.131560 systemd[1]: sshd@78-91.107.204.139:22-60.164.133.37:37316.service: Deactivated successfully.
May 10 00:21:38.332640 systemd[1]: Started sshd@79-91.107.204.139:22-60.164.133.37:38956.service - OpenSSH per-connection server daemon (60.164.133.37:38956).
May 10 00:21:38.564897 sudo[2064]: pam_unix(sudo:session): session closed for user root
May 10 00:21:38.727558 sshd[2035]: pam_unix(sshd:session): session closed for user core
May 10 00:21:38.733660 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
May 10 00:21:38.733837 systemd[1]: sshd@44-91.107.204.139:22-147.75.109.163:56110.service: Deactivated successfully.
May 10 00:21:38.736749 systemd[1]: session-7.scope: Deactivated successfully.
May 10 00:21:38.737069 systemd[1]: session-7.scope: Consumed 8.745s CPU time, 186.0M memory peak, 0B memory swap peak.
May 10 00:21:38.738991 systemd-logind[1458]: Removed session 7.
May 10 00:21:39.281034 sshd[3185]: Connection closed by authenticating user root 60.164.133.37 port 38956 [preauth]
May 10 00:21:39.283707 systemd[1]: sshd@79-91.107.204.139:22-60.164.133.37:38956.service: Deactivated successfully.
May 10 00:21:39.485645 systemd[1]: Started sshd@80-91.107.204.139:22-60.164.133.37:40334.service - OpenSSH per-connection server daemon (60.164.133.37:40334).
May 10 00:21:40.441701 sshd[3209]: Connection closed by authenticating user root 60.164.133.37 port 40334 [preauth]
May 10 00:21:40.446543 systemd[1]: sshd@80-91.107.204.139:22-60.164.133.37:40334.service: Deactivated successfully.
May 10 00:21:40.638548 systemd[1]: Started sshd@81-91.107.204.139:22-60.164.133.37:41658.service - OpenSSH per-connection server daemon (60.164.133.37:41658).
May 10 00:21:41.601185 sshd[3214]: Connection closed by authenticating user root 60.164.133.37 port 41658 [preauth]
May 10 00:21:41.606019 systemd[1]: sshd@81-91.107.204.139:22-60.164.133.37:41658.service: Deactivated successfully.
May 10 00:21:41.804943 systemd[1]: Started sshd@82-91.107.204.139:22-60.164.133.37:42998.service - OpenSSH per-connection server daemon (60.164.133.37:42998).
May 10 00:21:42.766044 sshd[3219]: Connection closed by authenticating user root 60.164.133.37 port 42998 [preauth]
May 10 00:21:42.770205 systemd[1]: sshd@82-91.107.204.139:22-60.164.133.37:42998.service: Deactivated successfully.
May 10 00:21:42.972759 systemd[1]: Started sshd@83-91.107.204.139:22-60.164.133.37:44450.service - OpenSSH per-connection server daemon (60.164.133.37:44450).
May 10 00:21:43.918744 sshd[3224]: Connection closed by authenticating user root 60.164.133.37 port 44450 [preauth]
May 10 00:21:43.921427 systemd[1]: sshd@83-91.107.204.139:22-60.164.133.37:44450.service: Deactivated successfully.
May 10 00:21:44.125694 systemd[1]: Started sshd@84-91.107.204.139:22-60.164.133.37:45854.service - OpenSSH per-connection server daemon (60.164.133.37:45854).
May 10 00:21:45.088198 sshd[3229]: Connection closed by authenticating user root 60.164.133.37 port 45854 [preauth]
May 10 00:21:45.091390 systemd[1]: sshd@84-91.107.204.139:22-60.164.133.37:45854.service: Deactivated successfully.
May 10 00:21:45.295347 systemd[1]: Started sshd@85-91.107.204.139:22-60.164.133.37:47338.service - OpenSSH per-connection server daemon (60.164.133.37:47338).
May 10 00:21:46.243076 sshd[3234]: Connection closed by authenticating user root 60.164.133.37 port 47338 [preauth]
May 10 00:21:46.247266 systemd[1]: sshd@85-91.107.204.139:22-60.164.133.37:47338.service: Deactivated successfully.
May 10 00:21:46.451892 systemd[1]: Started sshd@86-91.107.204.139:22-60.164.133.37:48692.service - OpenSSH per-connection server daemon (60.164.133.37:48692).
May 10 00:21:46.897698 kubelet[3104]: I0510 00:21:46.897655 3104 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 10 00:21:46.899579 containerd[1476]: time="2025-05-10T00:21:46.898233797Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 10 00:21:46.901051 kubelet[3104]: I0510 00:21:46.900083 3104 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 10 00:21:47.402329 sshd[3239]: Connection closed by authenticating user root 60.164.133.37 port 48692 [preauth]
May 10 00:21:47.405095 systemd[1]: sshd@86-91.107.204.139:22-60.164.133.37:48692.service: Deactivated successfully.
May 10 00:21:47.596816 systemd[1]: Started sshd@87-91.107.204.139:22-60.164.133.37:50068.service - OpenSSH per-connection server daemon (60.164.133.37:50068).
May 10 00:21:47.756031 kubelet[3104]: I0510 00:21:47.755779 3104 topology_manager.go:215] "Topology Admit Handler" podUID="0dc9798d-b098-4215-82eb-1c0fc82150d1" podNamespace="kube-system" podName="kube-proxy-j28w9"
May 10 00:21:47.771357 systemd[1]: Created slice kubepods-besteffort-pod0dc9798d_b098_4215_82eb_1c0fc82150d1.slice - libcontainer container kubepods-besteffort-pod0dc9798d_b098_4215_82eb_1c0fc82150d1.slice.
May 10 00:21:47.899326 kubelet[3104]: I0510 00:21:47.898598 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0dc9798d-b098-4215-82eb-1c0fc82150d1-kube-proxy\") pod \"kube-proxy-j28w9\" (UID: \"0dc9798d-b098-4215-82eb-1c0fc82150d1\") " pod="kube-system/kube-proxy-j28w9"
May 10 00:21:47.899326 kubelet[3104]: I0510 00:21:47.898654 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dc9798d-b098-4215-82eb-1c0fc82150d1-xtables-lock\") pod \"kube-proxy-j28w9\" (UID: \"0dc9798d-b098-4215-82eb-1c0fc82150d1\") " pod="kube-system/kube-proxy-j28w9"
May 10 00:21:47.899326 kubelet[3104]: I0510 00:21:47.898675 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dc9798d-b098-4215-82eb-1c0fc82150d1-lib-modules\") pod \"kube-proxy-j28w9\" (UID: \"0dc9798d-b098-4215-82eb-1c0fc82150d1\") " pod="kube-system/kube-proxy-j28w9"
May 10 00:21:47.899326 kubelet[3104]: I0510 00:21:47.898692 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxr7j\" (UniqueName: \"kubernetes.io/projected/0dc9798d-b098-4215-82eb-1c0fc82150d1-kube-api-access-cxr7j\") pod \"kube-proxy-j28w9\" (UID: \"0dc9798d-b098-4215-82eb-1c0fc82150d1\") " pod="kube-system/kube-proxy-j28w9"
May 10 00:21:47.904544 kubelet[3104]: I0510 00:21:47.902766 3104 topology_manager.go:215] "Topology Admit Handler" podUID="09aa72e2-eb45-406d-83de-db926e8bf680" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-kfvq5"
May 10 00:21:47.915573 systemd[1]: Created slice kubepods-besteffort-pod09aa72e2_eb45_406d_83de_db926e8bf680.slice - libcontainer container kubepods-besteffort-pod09aa72e2_eb45_406d_83de_db926e8bf680.slice.
May 10 00:21:48.000402 kubelet[3104]: I0510 00:21:47.999571 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09aa72e2-eb45-406d-83de-db926e8bf680-var-lib-calico\") pod \"tigera-operator-797db67f8-kfvq5\" (UID: \"09aa72e2-eb45-406d-83de-db926e8bf680\") " pod="tigera-operator/tigera-operator-797db67f8-kfvq5"
May 10 00:21:48.000402 kubelet[3104]: I0510 00:21:47.999649 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq7tk\" (UniqueName: \"kubernetes.io/projected/09aa72e2-eb45-406d-83de-db926e8bf680-kube-api-access-jq7tk\") pod \"tigera-operator-797db67f8-kfvq5\" (UID: \"09aa72e2-eb45-406d-83de-db926e8bf680\") " pod="tigera-operator/tigera-operator-797db67f8-kfvq5"
May 10 00:21:48.084384 containerd[1476]: time="2025-05-10T00:21:48.084283754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j28w9,Uid:0dc9798d-b098-4215-82eb-1c0fc82150d1,Namespace:kube-system,Attempt:0,}"
May 10 00:21:48.120189 containerd[1476]: time="2025-05-10T00:21:48.120095972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:48.120726 containerd[1476]: time="2025-05-10T00:21:48.120674256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:48.120790 containerd[1476]: time="2025-05-10T00:21:48.120733137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:48.120862 containerd[1476]: time="2025-05-10T00:21:48.120830817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:48.145552 systemd[1]: Started cri-containerd-b83faf215fdc341bf5bc5ce6b53af731c26df79bca3a5bc305ff55ef8ac1ca0f.scope - libcontainer container b83faf215fdc341bf5bc5ce6b53af731c26df79bca3a5bc305ff55ef8ac1ca0f.
May 10 00:21:48.175730 containerd[1476]: time="2025-05-10T00:21:48.175594372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j28w9,Uid:0dc9798d-b098-4215-82eb-1c0fc82150d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b83faf215fdc341bf5bc5ce6b53af731c26df79bca3a5bc305ff55ef8ac1ca0f\""
May 10 00:21:48.179942 containerd[1476]: time="2025-05-10T00:21:48.179891923Z" level=info msg="CreateContainer within sandbox \"b83faf215fdc341bf5bc5ce6b53af731c26df79bca3a5bc305ff55ef8ac1ca0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 10 00:21:48.195138 containerd[1476]: time="2025-05-10T00:21:48.195078713Z" level=info msg="CreateContainer within sandbox \"b83faf215fdc341bf5bc5ce6b53af731c26df79bca3a5bc305ff55ef8ac1ca0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ccc096045300c7650bde4e63424d62b82683845a211d782c25553f47a9f6c4a4\""
May 10 00:21:48.197637 containerd[1476]: time="2025-05-10T00:21:48.196407962Z" level=info msg="StartContainer for \"ccc096045300c7650bde4e63424d62b82683845a211d782c25553f47a9f6c4a4\""
May 10 00:21:48.224707 containerd[1476]: time="2025-05-10T00:21:48.224662726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kfvq5,Uid:09aa72e2-eb45-406d-83de-db926e8bf680,Namespace:tigera-operator,Attempt:0,}"
May 10 00:21:48.229544 systemd[1]: Started cri-containerd-ccc096045300c7650bde4e63424d62b82683845a211d782c25553f47a9f6c4a4.scope - libcontainer container ccc096045300c7650bde4e63424d62b82683845a211d782c25553f47a9f6c4a4.
May 10 00:21:48.260041 containerd[1476]: time="2025-05-10T00:21:48.259934861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:48.260041 containerd[1476]: time="2025-05-10T00:21:48.259994661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:48.260241 containerd[1476]: time="2025-05-10T00:21:48.260010701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:48.261449 containerd[1476]: time="2025-05-10T00:21:48.261108669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:48.269714 containerd[1476]: time="2025-05-10T00:21:48.269588810Z" level=info msg="StartContainer for \"ccc096045300c7650bde4e63424d62b82683845a211d782c25553f47a9f6c4a4\" returns successfully"
May 10 00:21:48.281533 systemd[1]: Started cri-containerd-9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f.scope - libcontainer container 9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f.
May 10 00:21:48.319741 kubelet[3104]: I0510 00:21:48.319670 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j28w9" podStartSLOduration=1.319643691 podStartE2EDuration="1.319643691s" podCreationTimestamp="2025-05-10 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:21:48.319148728 +0000 UTC m=+15.238348323" watchObservedRunningTime="2025-05-10 00:21:48.319643691 +0000 UTC m=+15.238843286"
May 10 00:21:48.343322 containerd[1476]: time="2025-05-10T00:21:48.342758418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-kfvq5,Uid:09aa72e2-eb45-406d-83de-db926e8bf680,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f\""
May 10 00:21:48.347204 containerd[1476]: time="2025-05-10T00:21:48.346972688Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 10 00:21:48.565094 sshd[3244]: Connection closed by authenticating user root 60.164.133.37 port 50068 [preauth]
May 10 00:21:48.565924 systemd[1]: sshd@87-91.107.204.139:22-60.164.133.37:50068.service: Deactivated successfully.
May 10 00:21:48.767864 systemd[1]: Started sshd@88-91.107.204.139:22-60.164.133.37:51388.service - OpenSSH per-connection server daemon (60.164.133.37:51388).
May 10 00:21:49.706352 sshd[3489]: Connection closed by authenticating user root 60.164.133.37 port 51388 [preauth]
May 10 00:21:49.709698 systemd[1]: sshd@88-91.107.204.139:22-60.164.133.37:51388.service: Deactivated successfully.
May 10 00:21:49.908777 systemd[1]: Started sshd@89-91.107.204.139:22-60.164.133.37:52890.service - OpenSSH per-connection server daemon (60.164.133.37:52890).
May 10 00:21:50.238151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597020495.mount: Deactivated successfully.
May 10 00:21:50.557135 containerd[1476]: time="2025-05-10T00:21:50.556956930Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:50.558686 containerd[1476]: time="2025-05-10T00:21:50.558628381Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 10 00:21:50.559450 containerd[1476]: time="2025-05-10T00:21:50.559403627Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:50.563211 containerd[1476]: time="2025-05-10T00:21:50.563138612Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 10 00:21:50.564515 containerd[1476]: time="2025-05-10T00:21:50.564432701Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.217412492s"
May 10 00:21:50.564515 containerd[1476]: time="2025-05-10T00:21:50.564476822Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 10 00:21:50.568638 containerd[1476]: time="2025-05-10T00:21:50.567381762Z" level=info msg="CreateContainer within sandbox \"9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 10 00:21:50.586042 containerd[1476]: time="2025-05-10T00:21:50.585973490Z" level=info msg="CreateContainer within sandbox \"9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8\""
May 10 00:21:50.587964 containerd[1476]: time="2025-05-10T00:21:50.586839456Z" level=info msg="StartContainer for \"38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8\""
May 10 00:21:50.616530 systemd[1]: Started cri-containerd-38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8.scope - libcontainer container 38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8.
May 10 00:21:50.648625 containerd[1476]: time="2025-05-10T00:21:50.648576441Z" level=info msg="StartContainer for \"38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8\" returns successfully"
May 10 00:21:50.850372 sshd[3494]: Connection closed by authenticating user root 60.164.133.37 port 52890 [preauth]
May 10 00:21:50.852975 systemd[1]: sshd@89-91.107.204.139:22-60.164.133.37:52890.service: Deactivated successfully.
May 10 00:21:51.053861 systemd[1]: Started sshd@90-91.107.204.139:22-60.164.133.37:54316.service - OpenSSH per-connection server daemon (60.164.133.37:54316).
May 10 00:21:51.336385 kubelet[3104]: I0510 00:21:51.336314 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-kfvq5" podStartSLOduration=2.116503821 podStartE2EDuration="4.336277531s" podCreationTimestamp="2025-05-10 00:21:47 +0000 UTC" firstStartedPulling="2025-05-10 00:21:48.345601638 +0000 UTC m=+15.264801233" lastFinishedPulling="2025-05-10 00:21:50.565375348 +0000 UTC m=+17.484574943" observedRunningTime="2025-05-10 00:21:51.334756801 +0000 UTC m=+18.253956436" watchObservedRunningTime="2025-05-10 00:21:51.336277531 +0000 UTC m=+18.255477126"
May 10 00:21:52.008712 sshd[3545]: Connection closed by authenticating user root 60.164.133.37 port 54316 [preauth]
May 10 00:21:52.012736 systemd[1]: sshd@90-91.107.204.139:22-60.164.133.37:54316.service: Deactivated successfully.
May 10 00:21:52.208855 systemd[1]: Started sshd@91-91.107.204.139:22-60.164.133.37:55766.service - OpenSSH per-connection server daemon (60.164.133.37:55766).
May 10 00:21:53.159034 sshd[3550]: Connection closed by authenticating user root 60.164.133.37 port 55766 [preauth]
May 10 00:21:53.161116 systemd[1]: sshd@91-91.107.204.139:22-60.164.133.37:55766.service: Deactivated successfully.
May 10 00:21:53.367588 systemd[1]: Started sshd@92-91.107.204.139:22-60.164.133.37:57068.service - OpenSSH per-connection server daemon (60.164.133.37:57068).
May 10 00:21:54.319328 sshd[3558]: Connection closed by authenticating user root 60.164.133.37 port 57068 [preauth]
May 10 00:21:54.322571 systemd[1]: sshd@92-91.107.204.139:22-60.164.133.37:57068.service: Deactivated successfully.
May 10 00:21:54.526596 systemd[1]: Started sshd@93-91.107.204.139:22-60.164.133.37:58430.service - OpenSSH per-connection server daemon (60.164.133.37:58430).
May 10 00:21:54.615620 kubelet[3104]: I0510 00:21:54.615102 3104 topology_manager.go:215] "Topology Admit Handler" podUID="d14df19d-28f4-4031-8acf-6b9943c29fb8" podNamespace="calico-system" podName="calico-typha-6c56f48f8b-r62vc"
May 10 00:21:54.624841 systemd[1]: Created slice kubepods-besteffort-podd14df19d_28f4_4031_8acf_6b9943c29fb8.slice - libcontainer container kubepods-besteffort-podd14df19d_28f4_4031_8acf_6b9943c29fb8.slice.
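The tigera-operator startup record above is a worked example of how kubelet's pod_startup_latency_tracker reports SLO latency net of image pulls: podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) is 4.336277531s, and subtracting the pull window (lastFinishedPulling minus firstStartedPulling, 2.219773710s) yields exactly the logged podStartSLOduration=2.116503821. For kube-proxy earlier, the pull timestamps were the zero time, so both durations matched at 1.319643691s. A sketch reproducing the arithmetic with the timestamps copied from the record (the subtraction is my reading of the tracker's output, which the numbers bear out):

```go
package main

import (
	"fmt"
	"time"
)

// parse handles the "2025-05-10 00:21:48.345601638 +0000 UTC" form used
// in the kubelet record (with the " m=+..." monotonic suffix dropped).
func parse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-05-10 00:21:47 +0000 UTC")
	pullStart := parse("2025-05-10 00:21:48.345601638 +0000 UTC")
	pullEnd := parse("2025-05-10 00:21:50.565375348 +0000 UTC")
	running := parse("2025-05-10 00:21:51.336277531 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // with the pull window excluded
	fmt.Println(e2e, slo)               // 4.336277531s 2.116503821s
}
```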
May 10 00:21:54.644960 kubelet[3104]: I0510 00:21:54.644915 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d14df19d-28f4-4031-8acf-6b9943c29fb8-tigera-ca-bundle\") pod \"calico-typha-6c56f48f8b-r62vc\" (UID: \"d14df19d-28f4-4031-8acf-6b9943c29fb8\") " pod="calico-system/calico-typha-6c56f48f8b-r62vc"
May 10 00:21:54.645352 kubelet[3104]: I0510 00:21:54.645140 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d14df19d-28f4-4031-8acf-6b9943c29fb8-typha-certs\") pod \"calico-typha-6c56f48f8b-r62vc\" (UID: \"d14df19d-28f4-4031-8acf-6b9943c29fb8\") " pod="calico-system/calico-typha-6c56f48f8b-r62vc"
May 10 00:21:54.645352 kubelet[3104]: I0510 00:21:54.645169 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxg8\" (UniqueName: \"kubernetes.io/projected/d14df19d-28f4-4031-8acf-6b9943c29fb8-kube-api-access-bnxg8\") pod \"calico-typha-6c56f48f8b-r62vc\" (UID: \"d14df19d-28f4-4031-8acf-6b9943c29fb8\") " pod="calico-system/calico-typha-6c56f48f8b-r62vc"
May 10 00:21:54.723657 kubelet[3104]: I0510 00:21:54.722538 3104 topology_manager.go:215] "Topology Admit Handler" podUID="2922fc29-bd9c-4f44-bea3-98ee1c2da728" podNamespace="calico-system" podName="calico-node-xqr4w"
May 10 00:21:54.731999 systemd[1]: Created slice kubepods-besteffort-pod2922fc29_bd9c_4f44_bea3_98ee1c2da728.slice - libcontainer container kubepods-besteffort-pod2922fc29_bd9c_4f44_bea3_98ee1c2da728.slice.
May 10 00:21:54.746216 kubelet[3104]: I0510 00:21:54.746142 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-flexvol-driver-host\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746216 kubelet[3104]: I0510 00:21:54.746188 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-policysync\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746552 kubelet[3104]: I0510 00:21:54.746313 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-var-run-calico\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746552 kubelet[3104]: I0510 00:21:54.746357 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-cni-bin-dir\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746552 kubelet[3104]: I0510 00:21:54.746400 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-var-lib-calico\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746552 kubelet[3104]: I0510 00:21:54.746421 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-cni-log-dir\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746552 kubelet[3104]: I0510 00:21:54.746438 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2922fc29-bd9c-4f44-bea3-98ee1c2da728-tigera-ca-bundle\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746676 kubelet[3104]: I0510 00:21:54.746454 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-cni-net-dir\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746676 kubelet[3104]: I0510 00:21:54.746472 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmkw\" (UniqueName: \"kubernetes.io/projected/2922fc29-bd9c-4f44-bea3-98ee1c2da728-kube-api-access-djmkw\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746676 kubelet[3104]: I0510 00:21:54.746543 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-lib-modules\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746676 kubelet[3104]: I0510 00:21:54.746564 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2922fc29-bd9c-4f44-bea3-98ee1c2da728-node-certs\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.746676 kubelet[3104]: I0510 00:21:54.746604 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2922fc29-bd9c-4f44-bea3-98ee1c2da728-xtables-lock\") pod \"calico-node-xqr4w\" (UID: \"2922fc29-bd9c-4f44-bea3-98ee1c2da728\") " pod="calico-system/calico-node-xqr4w"
May 10 00:21:54.849329 kubelet[3104]: E0510 00:21:54.849186 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:21:54.849329 kubelet[3104]: W0510 00:21:54.849213 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:21:54.849329 kubelet[3104]: E0510 00:21:54.849232 3104 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three-record FlexVolume probe failure above repeats near-verbatim, timestamps aside, throughout the records below; the repeats are omitted]
May 10 00:21:54.855472 kubelet[3104]: I0510 00:21:54.853955 3104 topology_manager.go:215] "Topology Admit Handler" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" podNamespace="calico-system" podName="csi-node-driver-55f69"
May 10 00:21:54.855472 kubelet[3104]: E0510 00:21:54.854219 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9"
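The "cni plugin not initialized" error for csi-node-driver-55f69 is ordering, not breakage: the pod was admitted before calico-node has written a CNI network config, so the runtime still reports NetworkReady=false and kubelet keeps retrying the sync. A runtime typically flips to ready once a config file appears in its CNI confdir; a minimal sketch of that condition, assuming the conventional default path /etc/cni/net.d (the path is my assumption, it does not appear in this log):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cniConfPresent reports whether any network config exists in dir, which
// is roughly the condition a CRI runtime waits on before it stops
// reporting NetworkReady=false.
func cniConfPresent(dir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("CNI config present:", cniConfPresent("/etc/cni/net.d"))
}
```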
May 10 00:21:54.933028 containerd[1476]: time="2025-05-10T00:21:54.932987225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c56f48f8b-r62vc,Uid:d14df19d-28f4-4031-8acf-6b9943c29fb8,Namespace:calico-system,Attempt:0,}"
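Each repeated kubelet triplet above is one FlexVolume probe cycle: kubelet sees the plugin directory nodeagent~uds (the vendor~driver naming decodes to a driver nodeagent/uds, which Calico uses to hand a per-pod UNIX domain socket into workloads), tries to exec `uds init`, the exec fails because the driver binary is not installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, and decoding the resulting empty output as JSON then fails with "unexpected end of JSON input". That last message is exactly what Go's encoding/json returns for empty input; a minimal reproduction (the driverStatus shape is illustrative, not kubelet's actual struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is the minimal {"status": ...} reply a FlexVolume driver
// is expected to print on stdout; the shape here is illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := []byte("") // what kubelet captured: the driver never ran

	var st driverStatus
	err := json.Unmarshal(output, &st)
	fmt.Println(err) // unexpected end of JSON input
}
```

Going by its name, the flexvol-driver-host host-path volume attached to calico-node-xqr4w earlier is the mount through which calico-node's flexvol init container installs that driver; these probe failures normally stop once it does.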
May 10 00:21:54.956915 kubelet[3104]: I0510 00:21:54.956884 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9fe5f5d0-455a-4b01-9791-41d2483185a9-varrun\") pod \"csi-node-driver-55f69\" (UID: \"9fe5f5d0-455a-4b01-9791-41d2483185a9\") " pod="calico-system/csi-node-driver-55f69"
May 10 00:21:54.959728 kubelet[3104]: I0510 00:21:54.959567 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9fe5f5d0-455a-4b01-9791-41d2483185a9-registration-dir\") pod \"csi-node-driver-55f69\" (UID: \"9fe5f5d0-455a-4b01-9791-41d2483185a9\") " pod="calico-system/csi-node-driver-55f69"
May 10 00:21:54.960448 kubelet[3104]: I0510 00:21:54.960282 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fe5f5d0-455a-4b01-9791-41d2483185a9-kubelet-dir\") pod \"csi-node-driver-55f69\" (UID: \"9fe5f5d0-455a-4b01-9791-41d2483185a9\") " pod="calico-system/csi-node-driver-55f69"
May 10 00:21:54.965055 kubelet[3104]: I0510 00:21:54.964467 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9fe5f5d0-455a-4b01-9791-41d2483185a9-socket-dir\") pod \"csi-node-driver-55f69\" (UID: \"9fe5f5d0-455a-4b01-9791-41d2483185a9\") " pod="calico-system/csi-node-driver-55f69"
May 10 00:21:54.973937 kubelet[3104]: I0510 00:21:54.973855 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h7pq\" (UniqueName: \"kubernetes.io/projected/9fe5f5d0-455a-4b01-9791-41d2483185a9-kube-api-access-2h7pq\") pod \"csi-node-driver-55f69\" (UID: \"9fe5f5d0-455a-4b01-9791-41d2483185a9\") " pod="calico-system/csi-node-driver-55f69"
May 10 00:21:54.986468 containerd[1476]: time="2025-05-10T00:21:54.984756232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:54.986468 containerd[1476]: time="2025-05-10T00:21:54.985462236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:54.986468 containerd[1476]: time="2025-05-10T00:21:54.985477277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:54.986468 containerd[1476]: time="2025-05-10T00:21:54.985589517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:55.018525 systemd[1]: Started cri-containerd-829de31f6dbcac9785c3635a00ef0acc28e72694701f9008bd46378c5d8ac5d7.scope - libcontainer container 829de31f6dbcac9785c3635a00ef0acc28e72694701f9008bd46378c5d8ac5d7.
May 10 00:21:55.039596 containerd[1476]: time="2025-05-10T00:21:55.039546973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqr4w,Uid:2922fc29-bd9c-4f44-bea3-98ee1c2da728,Namespace:calico-system,Attempt:0,}"
May 10 00:21:55.086368 containerd[1476]: time="2025-05-10T00:21:55.086105301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:21:55.087213 containerd[1476]: time="2025-05-10T00:21:55.086670144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:21:55.087213 containerd[1476]: time="2025-05-10T00:21:55.086786585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:55.087794 containerd[1476]: time="2025-05-10T00:21:55.087390749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:21:55.095879 kubelet[3104]: E0510 00:21:55.095759 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 10 00:21:55.095879 kubelet[3104]: W0510 00:21:55.095767 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 10 00:21:55.095879 kubelet[3104]: E0510 00:21:55.095785 3104 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 10 00:21:55.114558 systemd[1]: Started cri-containerd-5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722.scope - libcontainer container 5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722. May 10 00:21:55.120214 kubelet[3104]: E0510 00:21:55.120180 3104 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 10 00:21:55.120214 kubelet[3104]: W0510 00:21:55.120206 3104 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 10 00:21:55.120214 kubelet[3104]: E0510 00:21:55.120229 3104 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 10 00:21:55.165359 containerd[1476]: time="2025-05-10T00:21:55.163617980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xqr4w,Uid:2922fc29-bd9c-4f44-bea3-98ee1c2da728,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\"" May 10 00:21:55.167581 containerd[1476]: time="2025-05-10T00:21:55.167387443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 10 00:21:55.170028 containerd[1476]: time="2025-05-10T00:21:55.168957453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c56f48f8b-r62vc,Uid:d14df19d-28f4-4031-8acf-6b9943c29fb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"829de31f6dbcac9785c3635a00ef0acc28e72694701f9008bd46378c5d8ac5d7\"" May 10 00:21:55.478403 sshd[3567]: Connection closed by authenticating user root 60.164.133.37 port 58430 [preauth] May 10 00:21:55.481272 systemd[1]: sshd@93-91.107.204.139:22-60.164.133.37:58430.service: Deactivated successfully. May 10 00:21:55.681068 systemd[1]: Started sshd@94-91.107.204.139:22-60.164.133.37:59826.service - OpenSSH per-connection server daemon (60.164.133.37:59826). May 10 00:21:56.632393 sshd[3749]: Connection closed by authenticating user root 60.164.133.37 port 59826 [preauth] May 10 00:21:56.637136 systemd[1]: sshd@94-91.107.204.139:22-60.164.133.37:59826.service: Deactivated successfully. 
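
The driver-call.go errors above are the kubelet's FlexVolume probe: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and unmarshals whatever the driver prints to stdout as JSON. Since the executable does not exist yet, the output is empty and unmarshalling fails with "unexpected end of JSON input". A minimal sketch of the call-out contract such a driver has to satisfy, written in Python purely for illustration (the real uds driver is installed later by Calico's pod2daemon/flexvol machinery, not this script):

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver executable such as
    # .../volume/exec/nodeagent~uds/uds. The kubelet runs it as a
    # subprocess and parses its stdout as JSON, which is why a missing
    # or silent driver produces "unexpected end of JSON input".
    import json
    import sys

    def main() -> None:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # "init" must report success and advertise capabilities,
            # e.g. whether the driver implements attach/detach.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            # Unimplemented operations are declined, not errored, so
            # the kubelet falls back to its default handling.
            print(json.dumps({"status": "Not supported"}))

    if __name__ == "__main__":
        main()
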
May 10 00:21:56.794339 containerd[1476]: time="2025-05-10T00:21:56.794092194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:56.795999 containerd[1476]: time="2025-05-10T00:21:56.795696883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 10 00:21:56.798379 containerd[1476]: time="2025-05-10T00:21:56.797120732Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:56.799843 containerd[1476]: time="2025-05-10T00:21:56.799800028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:56.800691 containerd[1476]: time="2025-05-10T00:21:56.800657633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.633078109s" May 10 00:21:56.800825 containerd[1476]: time="2025-05-10T00:21:56.800804674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 10 00:21:56.801992 containerd[1476]: time="2025-05-10T00:21:56.801953001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 10 00:21:56.806228 containerd[1476]: time="2025-05-10T00:21:56.806195547Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 10 00:21:56.828418 containerd[1476]: time="2025-05-10T00:21:56.828376321Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26\"" May 10 00:21:56.831036 containerd[1476]: time="2025-05-10T00:21:56.830435173Z" level=info msg="StartContainer for \"93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26\"" May 10 00:21:56.844561 systemd[1]: Started sshd@95-91.107.204.139:22-60.164.133.37:32996.service - OpenSSH per-connection server daemon (60.164.133.37:32996). May 10 00:21:56.884527 systemd[1]: Started cri-containerd-93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26.scope - libcontainer container 93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26. May 10 00:21:56.918776 containerd[1476]: time="2025-05-10T00:21:56.918655027Z" level=info msg="StartContainer for \"93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26\" returns successfully" May 10 00:21:56.938602 systemd[1]: cri-containerd-93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26.scope: Deactivated successfully. 
May 10 00:21:56.958791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26-rootfs.mount: Deactivated successfully. May 10 00:21:57.014341 containerd[1476]: time="2025-05-10T00:21:57.014170724Z" level=info msg="shim disconnected" id=93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26 namespace=k8s.io May 10 00:21:57.014341 containerd[1476]: time="2025-05-10T00:21:57.014281684Z" level=warning msg="cleaning up after shim disconnected" id=93e5a01f18418bf1e495757bdee1dc8ba25c94f34edf50f634418a6866baae26 namespace=k8s.io May 10 00:21:57.014341 containerd[1476]: time="2025-05-10T00:21:57.014344565Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:21:57.211339 kubelet[3104]: E0510 00:21:57.210210 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:21:57.806187 sshd[3758]: Connection closed by authenticating user root 60.164.133.37 port 32996 [preauth] May 10 00:21:57.808211 systemd[1]: sshd@95-91.107.204.139:22-60.164.133.37:32996.service: Deactivated successfully. May 10 00:21:58.004473 systemd[1]: Started sshd@96-91.107.204.139:22-60.164.133.37:34582.service - OpenSSH per-connection server daemon (60.164.133.37:34582). May 10 00:21:58.946042 sshd[3830]: Connection closed by authenticating user root 60.164.133.37 port 34582 [preauth] May 10 00:21:58.949678 systemd[1]: sshd@96-91.107.204.139:22-60.164.133.37:34582.service: Deactivated successfully. May 10 00:21:59.149854 systemd[1]: Started sshd@97-91.107.204.139:22-60.164.133.37:35926.service - OpenSSH per-connection server daemon (60.164.133.37:35926). 
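
The recurring pod_workers.go message for csi-node-driver-55f69 ("container runtime network not ready ... cni plugin not initialized") is the kubelet's network-readiness gate: pods that need pod networking cannot be synced until a CNI network config exists, and with Calico that config is only written once the install-cni container (pulled below) has run. A small sketch of checking the same condition from the host, assuming the conventional CNI config directory /etc/cni/net.d:

    # Report whether any CNI network config has been installed yet; until
    # Calico's install-cni step writes one, the kubelet keeps logging
    # "cni plugin not initialized" and defers non-host-network pods.
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"  # assumed default location

    if os.path.isdir(CNI_CONF_DIR) and os.listdir(CNI_CONF_DIR):
        print("CNI configs present:", sorted(os.listdir(CNI_CONF_DIR)))
    else:
        print("no CNI config yet; pod networking is not ready")
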
May 10 00:21:59.169416 containerd[1476]: time="2025-05-10T00:21:59.168653887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:59.170765 containerd[1476]: time="2025-05-10T00:21:59.170699219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 10 00:21:59.172016 containerd[1476]: time="2025-05-10T00:21:59.171882865Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:59.175696 containerd[1476]: time="2025-05-10T00:21:59.175634847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:21:59.183104 containerd[1476]: time="2025-05-10T00:21:59.182090163Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.380098442s" May 10 00:21:59.183104 containerd[1476]: time="2025-05-10T00:21:59.182131884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 10 00:21:59.186750 containerd[1476]: time="2025-05-10T00:21:59.185468463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 10 00:21:59.194964 containerd[1476]: time="2025-05-10T00:21:59.194740635Z" level=info msg="CreateContainer within sandbox \"829de31f6dbcac9785c3635a00ef0acc28e72694701f9008bd46378c5d8ac5d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 10 00:21:59.211799 kubelet[3104]: E0510 00:21:59.211237 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:21:59.217443 containerd[1476]: time="2025-05-10T00:21:59.217373284Z" level=info msg="CreateContainer within sandbox \"829de31f6dbcac9785c3635a00ef0acc28e72694701f9008bd46378c5d8ac5d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c15b817da1e8bb3349106bb2190bb428b47bbe25673e92a74c7d5ff95eeeebbd\"" May 10 00:21:59.218947 containerd[1476]: time="2025-05-10T00:21:59.218843932Z" level=info msg="StartContainer for \"c15b817da1e8bb3349106bb2190bb428b47bbe25673e92a74c7d5ff95eeeebbd\"" May 10 00:21:59.268541 systemd[1]: Started cri-containerd-c15b817da1e8bb3349106bb2190bb428b47bbe25673e92a74c7d5ff95eeeebbd.scope - libcontainer container c15b817da1e8bb3349106bb2190bb428b47bbe25673e92a74c7d5ff95eeeebbd. 
May 10 00:21:59.312465 containerd[1476]: time="2025-05-10T00:21:59.312358024Z" level=info msg="StartContainer for \"c15b817da1e8bb3349106bb2190bb428b47bbe25673e92a74c7d5ff95eeeebbd\" returns successfully" May 10 00:22:00.095691 sshd[3840]: Connection closed by authenticating user root 60.164.133.37 port 35926 [preauth] May 10 00:22:00.098393 systemd[1]: sshd@97-91.107.204.139:22-60.164.133.37:35926.service: Deactivated successfully. May 10 00:22:00.296751 systemd[1]: Started sshd@98-91.107.204.139:22-60.164.133.37:37132.service - OpenSSH per-connection server daemon (60.164.133.37:37132). May 10 00:22:00.355959 kubelet[3104]: I0510 00:22:00.355733 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:22:01.214068 kubelet[3104]: E0510 00:22:01.210411 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:22:01.229680 sshd[3883]: Connection closed by authenticating user root 60.164.133.37 port 37132 [preauth] May 10 00:22:01.232699 systemd[1]: sshd@98-91.107.204.139:22-60.164.133.37:37132.service: Deactivated successfully. May 10 00:22:01.441717 systemd[1]: Started sshd@99-91.107.204.139:22-60.164.133.37:38556.service - OpenSSH per-connection server daemon (60.164.133.37:38556). May 10 00:22:02.402776 sshd[3888]: Connection closed by authenticating user root 60.164.133.37 port 38556 [preauth] May 10 00:22:02.406113 systemd[1]: sshd@99-91.107.204.139:22-60.164.133.37:38556.service: Deactivated successfully. May 10 00:22:02.610830 systemd[1]: Started sshd@100-91.107.204.139:22-60.164.133.37:40170.service - OpenSSH per-connection server daemon (60.164.133.37:40170). May 10 00:22:03.211877 kubelet[3104]: E0510 00:22:03.211792 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:22:03.563925 sshd[3893]: Connection closed by authenticating user root 60.164.133.37 port 40170 [preauth] May 10 00:22:03.567212 systemd[1]: sshd@100-91.107.204.139:22-60.164.133.37:40170.service: Deactivated successfully. May 10 00:22:03.777028 systemd[1]: Started sshd@101-91.107.204.139:22-60.164.133.37:41478.service - OpenSSH per-connection server daemon (60.164.133.37:41478). 
May 10 00:22:04.159097 containerd[1476]: time="2025-05-10T00:22:04.159025268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:04.162656 containerd[1476]: time="2025-05-10T00:22:04.162586087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 10 00:22:04.163987 containerd[1476]: time="2025-05-10T00:22:04.163931414Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:04.167638 containerd[1476]: time="2025-05-10T00:22:04.167539472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:04.169111 containerd[1476]: time="2025-05-10T00:22:04.169048400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.983227935s" May 10 00:22:04.169111 containerd[1476]: time="2025-05-10T00:22:04.169088520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 10 00:22:04.179210 containerd[1476]: time="2025-05-10T00:22:04.178368288Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 10 00:22:04.195075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866118550.mount: Deactivated successfully. May 10 00:22:04.197800 containerd[1476]: time="2025-05-10T00:22:04.197759068Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154\"" May 10 00:22:04.199565 containerd[1476]: time="2025-05-10T00:22:04.199466037Z" level=info msg="StartContainer for \"afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154\"" May 10 00:22:04.241559 systemd[1]: Started cri-containerd-afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154.scope - libcontainer container afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154. 
May 10 00:22:04.272481 containerd[1476]: time="2025-05-10T00:22:04.272369292Z" level=info msg="StartContainer for \"afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154\" returns successfully" May 10 00:22:04.389569 kubelet[3104]: I0510 00:22:04.389481 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c56f48f8b-r62vc" podStartSLOduration=6.376221584 podStartE2EDuration="10.389464695s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:21:55.170529982 +0000 UTC m=+22.089729577" lastFinishedPulling="2025-05-10 00:21:59.183773093 +0000 UTC m=+26.102972688" observedRunningTime="2025-05-10 00:21:59.371033118 +0000 UTC m=+26.290232713" watchObservedRunningTime="2025-05-10 00:22:04.389464695 +0000 UTC m=+31.308664290" May 10 00:22:04.753173 sshd[3903]: Connection closed by authenticating user root 60.164.133.37 port 41478 [preauth] May 10 00:22:04.759703 systemd[1]: sshd@101-91.107.204.139:22-60.164.133.37:41478.service: Deactivated successfully. May 10 00:22:04.791534 systemd[1]: cri-containerd-afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154.scope: Deactivated successfully. May 10 00:22:04.836151 kubelet[3104]: I0510 00:22:04.835363 3104 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 00:22:04.866388 kubelet[3104]: I0510 00:22:04.865129 3104 topology_manager.go:215] "Topology Admit Handler" podUID="e1040029-c5db-4540-b478-e7a12da2dd29" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ffbwh" May 10 00:22:04.870259 kubelet[3104]: I0510 00:22:04.869067 3104 topology_manager.go:215] "Topology Admit Handler" podUID="42b78a76-4b54-49d0-9136-33da1a2535fc" podNamespace="calico-apiserver" podName="calico-apiserver-5ff6889754-226jl" May 10 00:22:04.873090 kubelet[3104]: I0510 00:22:04.872815 3104 topology_manager.go:215] "Topology Admit Handler" podUID="ea774e10-d5e8-49b4-87ba-cd65e20304d9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h4s4c" May 10 00:22:04.878191 kubelet[3104]: I0510 00:22:04.876372 3104 topology_manager.go:215] "Topology Admit Handler" podUID="09b51544-4344-4b33-9ab6-929ff2781faf" podNamespace="calico-system" podName="calico-kube-controllers-84fbc7f9c5-z5595" May 10 00:22:04.879823 systemd[1]: Created slice kubepods-burstable-pode1040029_c5db_4540_b478_e7a12da2dd29.slice - libcontainer container kubepods-burstable-pode1040029_c5db_4540_b478_e7a12da2dd29.slice. May 10 00:22:04.886611 kubelet[3104]: I0510 00:22:04.885813 3104 topology_manager.go:215] "Topology Admit Handler" podUID="a161c57b-ec58-4358-a6d8-144a63406336" podNamespace="calico-apiserver" podName="calico-apiserver-5ff6889754-2dhwk" May 10 00:22:04.905225 systemd[1]: Created slice kubepods-burstable-podea774e10_d5e8_49b4_87ba_cd65e20304d9.slice - libcontainer container kubepods-burstable-podea774e10_d5e8_49b4_87ba_cd65e20304d9.slice. May 10 00:22:04.918104 systemd[1]: Created slice kubepods-besteffort-pod42b78a76_4b54_49d0_9136_33da1a2535fc.slice - libcontainer container kubepods-besteffort-pod42b78a76_4b54_49d0_9136_33da1a2535fc.slice. May 10 00:22:04.927906 systemd[1]: Created slice kubepods-besteffort-pod09b51544_4344_4b33_9ab6_929ff2781faf.slice - libcontainer container kubepods-besteffort-pod09b51544_4344_4b33_9ab6_929ff2781faf.slice. May 10 00:22:04.935618 systemd[1]: Created slice kubepods-besteffort-poda161c57b_ec58_4358_a6d8_144a63406336.slice - libcontainer container kubepods-besteffort-poda161c57b_ec58_4358_a6d8_144a63406336.slice. 
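
The pod_startup_latency_tracker entry above is internally consistent: calico-typha-6c56f48f8b-r62vc was created at 00:21:54, observed running at 00:22:04.389 (podStartE2EDuration = 10.389s), and spent the window from firstStartedPulling to lastFinishedPulling (~4.013s) pulling images, leaving podStartSLOduration = 6.376s. A sketch of that arithmetic, trimming the kubelet's nanosecond timestamps to the microseconds Python's strptime accepts:

    from datetime import datetime

    def parse_k8s_time(s: str) -> datetime:
        # e.g. "2025-05-10 00:21:55.170529982 +0000 UTC": trim the
        # nanoseconds to the 6 digits %f accepts and drop the trailing
        # zone name, which strptime cannot parse alongside %z.
        date, clock, offset, _zone = s.split()
        secs, _, frac = clock.partition(".")
        frac = (frac + "000000")[:6]
        return datetime.strptime(f"{date} {secs}.{frac} {offset}",
                                 "%Y-%m-%d %H:%M:%S.%f %z")

    created = parse_k8s_time("2025-05-10 00:21:54 +0000 UTC")
    first_pull = parse_k8s_time("2025-05-10 00:21:55.170529982 +0000 UTC")
    last_pull = parse_k8s_time("2025-05-10 00:21:59.183773093 +0000 UTC")
    running = parse_k8s_time("2025-05-10 00:22:04.389464695 +0000 UTC")

    pulling = last_pull - first_pull     # ~4.013s spent pulling images
    e2e = running - created              # podStartE2EDuration, ~10.389s
    print("image pulling:", pulling)
    print("podStartE2EDuration:", e2e)
    print("podStartSLOduration:", e2e - pulling)  # ~6.376s
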
May 10 00:22:04.949128 containerd[1476]: time="2025-05-10T00:22:04.948954456Z" level=info msg="shim disconnected" id=afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154 namespace=k8s.io May 10 00:22:04.949128 containerd[1476]: time="2025-05-10T00:22:04.949020056Z" level=warning msg="cleaning up after shim disconnected" id=afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154 namespace=k8s.io May 10 00:22:04.949128 containerd[1476]: time="2025-05-10T00:22:04.949030456Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:22:04.959196 kubelet[3104]: I0510 00:22:04.956477 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czt8f\" (UniqueName: \"kubernetes.io/projected/09b51544-4344-4b33-9ab6-929ff2781faf-kube-api-access-czt8f\") pod \"calico-kube-controllers-84fbc7f9c5-z5595\" (UID: \"09b51544-4344-4b33-9ab6-929ff2781faf\") " pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" May 10 00:22:04.959196 kubelet[3104]: I0510 00:22:04.956540 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5n4\" (UniqueName: \"kubernetes.io/projected/a161c57b-ec58-4358-a6d8-144a63406336-kube-api-access-fl5n4\") pod \"calico-apiserver-5ff6889754-2dhwk\" (UID: \"a161c57b-ec58-4358-a6d8-144a63406336\") " pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" May 10 00:22:04.959196 kubelet[3104]: I0510 00:22:04.956570 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwm77\" (UniqueName: \"kubernetes.io/projected/e1040029-c5db-4540-b478-e7a12da2dd29-kube-api-access-gwm77\") pod \"coredns-7db6d8ff4d-ffbwh\" (UID: \"e1040029-c5db-4540-b478-e7a12da2dd29\") " pod="kube-system/coredns-7db6d8ff4d-ffbwh" May 10 00:22:04.959196 kubelet[3104]: I0510 00:22:04.956613 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42b78a76-4b54-49d0-9136-33da1a2535fc-calico-apiserver-certs\") pod \"calico-apiserver-5ff6889754-226jl\" (UID: \"42b78a76-4b54-49d0-9136-33da1a2535fc\") " pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" May 10 00:22:04.959196 kubelet[3104]: I0510 00:22:04.956632 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea774e10-d5e8-49b4-87ba-cd65e20304d9-config-volume\") pod \"coredns-7db6d8ff4d-h4s4c\" (UID: \"ea774e10-d5e8-49b4-87ba-cd65e20304d9\") " pod="kube-system/coredns-7db6d8ff4d-h4s4c" May 10 00:22:04.959467 kubelet[3104]: I0510 00:22:04.956648 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a161c57b-ec58-4358-a6d8-144a63406336-calico-apiserver-certs\") pod \"calico-apiserver-5ff6889754-2dhwk\" (UID: \"a161c57b-ec58-4358-a6d8-144a63406336\") " pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" May 10 00:22:04.959467 kubelet[3104]: I0510 00:22:04.956669 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1040029-c5db-4540-b478-e7a12da2dd29-config-volume\") pod \"coredns-7db6d8ff4d-ffbwh\" (UID: \"e1040029-c5db-4540-b478-e7a12da2dd29\") " pod="kube-system/coredns-7db6d8ff4d-ffbwh" May 10 00:22:04.959467 kubelet[3104]: 
I0510 00:22:04.956688 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfzd\" (UniqueName: \"kubernetes.io/projected/ea774e10-d5e8-49b4-87ba-cd65e20304d9-kube-api-access-5dfzd\") pod \"coredns-7db6d8ff4d-h4s4c\" (UID: \"ea774e10-d5e8-49b4-87ba-cd65e20304d9\") " pod="kube-system/coredns-7db6d8ff4d-h4s4c" May 10 00:22:04.959467 kubelet[3104]: I0510 00:22:04.956705 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lb6r\" (UniqueName: \"kubernetes.io/projected/42b78a76-4b54-49d0-9136-33da1a2535fc-kube-api-access-4lb6r\") pod \"calico-apiserver-5ff6889754-226jl\" (UID: \"42b78a76-4b54-49d0-9136-33da1a2535fc\") " pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" May 10 00:22:04.959467 kubelet[3104]: I0510 00:22:04.956725 3104 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b51544-4344-4b33-9ab6-929ff2781faf-tigera-ca-bundle\") pod \"calico-kube-controllers-84fbc7f9c5-z5595\" (UID: \"09b51544-4344-4b33-9ab6-929ff2781faf\") " pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" May 10 00:22:04.966779 systemd[1]: Started sshd@102-91.107.204.139:22-60.164.133.37:42848.service - OpenSSH per-connection server daemon (60.164.133.37:42848). May 10 00:22:05.194660 containerd[1476]: time="2025-05-10T00:22:05.194608462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ffbwh,Uid:e1040029-c5db-4540-b478-e7a12da2dd29,Namespace:kube-system,Attempt:0,}" May 10 00:22:05.204464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afd0f606759cb7ca580c91f0662792d1737048335cef85e02f94ff65de9a4154-rootfs.mount: Deactivated successfully. May 10 00:22:05.216018 containerd[1476]: time="2025-05-10T00:22:05.215967290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h4s4c,Uid:ea774e10-d5e8-49b4-87ba-cd65e20304d9,Namespace:kube-system,Attempt:0,}" May 10 00:22:05.228287 systemd[1]: Created slice kubepods-besteffort-pod9fe5f5d0_455a_4b01_9791_41d2483185a9.slice - libcontainer container kubepods-besteffort-pod9fe5f5d0_455a_4b01_9791_41d2483185a9.slice. 
May 10 00:22:05.231703 containerd[1476]: time="2025-05-10T00:22:05.231603769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-226jl,Uid:42b78a76-4b54-49d0-9136-33da1a2535fc,Namespace:calico-apiserver,Attempt:0,}" May 10 00:22:05.240581 containerd[1476]: time="2025-05-10T00:22:05.240540094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-55f69,Uid:9fe5f5d0-455a-4b01-9791-41d2483185a9,Namespace:calico-system,Attempt:0,}" May 10 00:22:05.240913 containerd[1476]: time="2025-05-10T00:22:05.240883376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbc7f9c5-z5595,Uid:09b51544-4344-4b33-9ab6-929ff2781faf,Namespace:calico-system,Attempt:0,}" May 10 00:22:05.254333 containerd[1476]: time="2025-05-10T00:22:05.254274563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-2dhwk,Uid:a161c57b-ec58-4358-a6d8-144a63406336,Namespace:calico-apiserver,Attempt:0,}" May 10 00:22:05.345644 containerd[1476]: time="2025-05-10T00:22:05.344705900Z" level=error msg="Failed to destroy network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.346191 containerd[1476]: time="2025-05-10T00:22:05.346137227Z" level=error msg="encountered an error cleaning up failed sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.346375 containerd[1476]: time="2025-05-10T00:22:05.346346348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ffbwh,Uid:e1040029-c5db-4540-b478-e7a12da2dd29,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.346803 kubelet[3104]: E0510 00:22:05.346745 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.346893 kubelet[3104]: E0510 00:22:05.346816 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ffbwh" May 10 00:22:05.346893 kubelet[3104]: E0510 00:22:05.346839 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-ffbwh" May 10 00:22:05.346957 kubelet[3104]: E0510 00:22:05.346880 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-ffbwh_kube-system(e1040029-c5db-4540-b478-e7a12da2dd29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-ffbwh_kube-system(e1040029-c5db-4540-b478-e7a12da2dd29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ffbwh" podUID="e1040029-c5db-4540-b478-e7a12da2dd29" May 10 00:22:05.382875 containerd[1476]: time="2025-05-10T00:22:05.382747812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 10 00:22:05.389484 kubelet[3104]: I0510 00:22:05.389441 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:05.391570 containerd[1476]: time="2025-05-10T00:22:05.391374176Z" level=info msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" May 10 00:22:05.391661 containerd[1476]: time="2025-05-10T00:22:05.391587177Z" level=info msg="Ensure that sandbox 94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b in task-service has been cleanup successfully" May 10 00:22:05.483086 containerd[1476]: time="2025-05-10T00:22:05.482919598Z" level=error msg="Failed to destroy network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.484276 containerd[1476]: time="2025-05-10T00:22:05.484177885Z" level=error msg="encountered an error cleaning up failed sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.484276 containerd[1476]: time="2025-05-10T00:22:05.484263125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-226jl,Uid:42b78a76-4b54-49d0-9136-33da1a2535fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.485086 kubelet[3104]: E0510 00:22:05.485029 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.487152 kubelet[3104]: E0510 00:22:05.485106 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" May 10 00:22:05.487152 kubelet[3104]: E0510 00:22:05.485128 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" May 10 00:22:05.487152 kubelet[3104]: E0510 00:22:05.485192 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff6889754-226jl_calico-apiserver(42b78a76-4b54-49d0-9136-33da1a2535fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff6889754-226jl_calico-apiserver(42b78a76-4b54-49d0-9136-33da1a2535fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" podUID="42b78a76-4b54-49d0-9136-33da1a2535fc" May 10 00:22:05.503718 containerd[1476]: time="2025-05-10T00:22:05.503576263Z" level=error msg="Failed to destroy network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.504239 containerd[1476]: time="2025-05-10T00:22:05.504060945Z" level=error msg="encountered an error cleaning up failed sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.504239 containerd[1476]: time="2025-05-10T00:22:05.504118585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h4s4c,Uid:ea774e10-d5e8-49b4-87ba-cd65e20304d9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.504677 kubelet[3104]: E0510 00:22:05.504504 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.504677 kubelet[3104]: E0510 00:22:05.504561 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h4s4c" May 10 00:22:05.504677 kubelet[3104]: E0510 00:22:05.504583 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h4s4c" May 10 00:22:05.504911 kubelet[3104]: E0510 00:22:05.504626 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h4s4c_kube-system(ea774e10-d5e8-49b4-87ba-cd65e20304d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h4s4c_kube-system(ea774e10-d5e8-49b4-87ba-cd65e20304d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h4s4c" podUID="ea774e10-d5e8-49b4-87ba-cd65e20304d9" May 10 00:22:05.512945 containerd[1476]: time="2025-05-10T00:22:05.512751429Z" level=error msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" failed" error="failed to destroy network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.513054 kubelet[3104]: E0510 00:22:05.512977 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:05.513104 kubelet[3104]: E0510 00:22:05.513032 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b"} May 10 00:22:05.513104 kubelet[3104]: E0510 00:22:05.513087 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1040029-c5db-4540-b478-e7a12da2dd29\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:05.513179 kubelet[3104]: E0510 00:22:05.513107 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1040029-c5db-4540-b478-e7a12da2dd29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-ffbwh" podUID="e1040029-c5db-4540-b478-e7a12da2dd29" May 10 00:22:05.517011 containerd[1476]: time="2025-05-10T00:22:05.516025245Z" level=error msg="Failed to destroy network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.517011 containerd[1476]: time="2025-05-10T00:22:05.516548328Z" level=error msg="encountered an error cleaning up failed sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.517011 containerd[1476]: time="2025-05-10T00:22:05.516606128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-55f69,Uid:9fe5f5d0-455a-4b01-9791-41d2483185a9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.517307 kubelet[3104]: E0510 00:22:05.517243 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.517456 containerd[1476]: time="2025-05-10T00:22:05.517411892Z" level=error msg="Failed to destroy network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.517585 kubelet[3104]: E0510 00:22:05.517556 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-55f69" May 10 00:22:05.517666 kubelet[3104]: E0510 00:22:05.517589 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-55f69" May 10 00:22:05.517666 kubelet[3104]: E0510 00:22:05.517636 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-55f69_calico-system(9fe5f5d0-455a-4b01-9791-41d2483185a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-55f69_calico-system(9fe5f5d0-455a-4b01-9791-41d2483185a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:22:05.518205 containerd[1476]: time="2025-05-10T00:22:05.518153856Z" level=error msg="encountered an error cleaning up failed sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.518272 containerd[1476]: time="2025-05-10T00:22:05.518219376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbc7f9c5-z5595,Uid:09b51544-4344-4b33-9ab6-929ff2781faf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.518531 kubelet[3104]: E0510 00:22:05.518478 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.518589 kubelet[3104]: E0510 00:22:05.518539 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" May 10 00:22:05.518589 kubelet[3104]: E0510 00:22:05.518557 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" May 10 00:22:05.518638 kubelet[3104]: E0510 00:22:05.518593 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84fbc7f9c5-z5595_calico-system(09b51544-4344-4b33-9ab6-929ff2781faf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84fbc7f9c5-z5595_calico-system(09b51544-4344-4b33-9ab6-929ff2781faf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" podUID="09b51544-4344-4b33-9ab6-929ff2781faf" May 10 00:22:05.526342 containerd[1476]: time="2025-05-10T00:22:05.526236297Z" level=error msg="Failed to destroy network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.526840 containerd[1476]: time="2025-05-10T00:22:05.526796900Z" level=error msg="encountered an error cleaning up failed sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.527068 containerd[1476]: time="2025-05-10T00:22:05.527040541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-2dhwk,Uid:a161c57b-ec58-4358-a6d8-144a63406336,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.527460 kubelet[3104]: E0510 00:22:05.527423 3104 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:05.528129 kubelet[3104]: E0510 00:22:05.527587 3104 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" May 10 
00:22:05.528264 kubelet[3104]: E0510 00:22:05.528241 3104 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" May 10 00:22:05.528676 kubelet[3104]: E0510 00:22:05.528383 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ff6889754-2dhwk_calico-apiserver(a161c57b-ec58-4358-a6d8-144a63406336)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ff6889754-2dhwk_calico-apiserver(a161c57b-ec58-4358-a6d8-144a63406336)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" podUID="a161c57b-ec58-4358-a6d8-144a63406336" May 10 00:22:05.918888 sshd[3960]: Connection closed by authenticating user root 60.164.133.37 port 42848 [preauth] May 10 00:22:05.922582 systemd[1]: sshd@102-91.107.204.139:22-60.164.133.37:42848.service: Deactivated successfully. May 10 00:22:06.121792 systemd[1]: Started sshd@103-91.107.204.139:22-60.164.133.37:44244.service - OpenSSH per-connection server daemon (60.164.133.37:44244). May 10 00:22:06.193156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a-shm.mount: Deactivated successfully. May 10 00:22:06.193322 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b-shm.mount: Deactivated successfully. 
May 10 00:22:06.394528 kubelet[3104]: I0510 00:22:06.394244 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:06.396359 containerd[1476]: time="2025-05-10T00:22:06.396117614Z" level=info msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" May 10 00:22:06.398031 containerd[1476]: time="2025-05-10T00:22:06.396925138Z" level=info msg="Ensure that sandbox 344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837 in task-service has been cleanup successfully" May 10 00:22:06.398954 kubelet[3104]: I0510 00:22:06.398922 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:06.402607 containerd[1476]: time="2025-05-10T00:22:06.402571486Z" level=info msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" May 10 00:22:06.402747 containerd[1476]: time="2025-05-10T00:22:06.402728247Z" level=info msg="Ensure that sandbox 0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422 in task-service has been cleanup successfully" May 10 00:22:06.404844 kubelet[3104]: I0510 00:22:06.404742 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:06.405977 containerd[1476]: time="2025-05-10T00:22:06.405798902Z" level=info msg="StopPodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" May 10 00:22:06.405977 containerd[1476]: time="2025-05-10T00:22:06.405955103Z" level=info msg="Ensure that sandbox 070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0 in task-service has been cleanup successfully" May 10 00:22:06.407205 kubelet[3104]: I0510 00:22:06.406972 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:06.411016 containerd[1476]: time="2025-05-10T00:22:06.410271684Z" level=info msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" May 10 00:22:06.411445 containerd[1476]: time="2025-05-10T00:22:06.411415170Z" level=info msg="Ensure that sandbox 867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a in task-service has been cleanup successfully" May 10 00:22:06.411470 kubelet[3104]: I0510 00:22:06.411152 3104 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:06.414100 containerd[1476]: time="2025-05-10T00:22:06.414059783Z" level=info msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" May 10 00:22:06.415248 containerd[1476]: time="2025-05-10T00:22:06.415190509Z" level=info msg="Ensure that sandbox 4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041 in task-service has been cleanup successfully" May 10 00:22:06.472213 containerd[1476]: time="2025-05-10T00:22:06.471386667Z" level=error msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" failed" error="failed to destroy network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:06.472381 kubelet[3104]: E0510 00:22:06.471983 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:06.472381 kubelet[3104]: E0510 00:22:06.472025 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837"} May 10 00:22:06.472381 kubelet[3104]: E0510 00:22:06.472060 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09b51544-4344-4b33-9ab6-929ff2781faf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:06.472381 kubelet[3104]: E0510 00:22:06.472097 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09b51544-4344-4b33-9ab6-929ff2781faf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" podUID="09b51544-4344-4b33-9ab6-929ff2781faf" May 10 00:22:06.481083 containerd[1476]: time="2025-05-10T00:22:06.481027795Z" level=error msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" failed" error="failed to destroy network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:06.481647 kubelet[3104]: E0510 00:22:06.481469 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:06.481730 kubelet[3104]: E0510 00:22:06.481662 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a"} May 10 00:22:06.481730 kubelet[3104]: E0510 00:22:06.481714 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea774e10-d5e8-49b4-87ba-cd65e20304d9\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:06.481818 kubelet[3104]: E0510 00:22:06.481735 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea774e10-d5e8-49b4-87ba-cd65e20304d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h4s4c" podUID="ea774e10-d5e8-49b4-87ba-cd65e20304d9" May 10 00:22:06.490127 containerd[1476]: time="2025-05-10T00:22:06.488877954Z" level=error msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" failed" error="failed to destroy network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:06.490448 kubelet[3104]: E0510 00:22:06.490399 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:06.490915 kubelet[3104]: E0510 00:22:06.490457 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422"} May 10 00:22:06.490915 kubelet[3104]: E0510 00:22:06.490532 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9fe5f5d0-455a-4b01-9791-41d2483185a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:06.490915 kubelet[3104]: E0510 00:22:06.490560 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9fe5f5d0-455a-4b01-9791-41d2483185a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-55f69" podUID="9fe5f5d0-455a-4b01-9791-41d2483185a9" May 10 00:22:06.493089 containerd[1476]: time="2025-05-10T00:22:06.493014335Z" level=error msg="StopPodSandbox for 
\"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" failed" error="failed to destroy network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:06.493321 kubelet[3104]: E0510 00:22:06.493245 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:06.493372 kubelet[3104]: E0510 00:22:06.493345 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0"} May 10 00:22:06.493405 kubelet[3104]: E0510 00:22:06.493380 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42b78a76-4b54-49d0-9136-33da1a2535fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:06.494776 kubelet[3104]: E0510 00:22:06.493400 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42b78a76-4b54-49d0-9136-33da1a2535fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" podUID="42b78a76-4b54-49d0-9136-33da1a2535fc" May 10 00:22:06.500452 containerd[1476]: time="2025-05-10T00:22:06.500369171Z" level=error msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" failed" error="failed to destroy network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 10 00:22:06.500719 kubelet[3104]: E0510 00:22:06.500671 3104 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:06.500799 kubelet[3104]: E0510 00:22:06.500728 3104 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041"} May 10 00:22:06.500799 kubelet[3104]: E0510 00:22:06.500764 3104 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a161c57b-ec58-4358-a6d8-144a63406336\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 10 00:22:06.500799 kubelet[3104]: E0510 00:22:06.500786 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a161c57b-ec58-4358-a6d8-144a63406336\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" podUID="a161c57b-ec58-4358-a6d8-144a63406336" May 10 00:22:07.054036 sshd[4183]: Connection closed by authenticating user root 60.164.133.37 port 44244 [preauth] May 10 00:22:07.059760 systemd[1]: sshd@103-91.107.204.139:22-60.164.133.37:44244.service: Deactivated successfully. May 10 00:22:07.263852 systemd[1]: Started sshd@104-91.107.204.139:22-60.164.133.37:45488.service - OpenSSH per-connection server daemon (60.164.133.37:45488). May 10 00:22:08.223249 sshd[4280]: Connection closed by authenticating user root 60.164.133.37 port 45488 [preauth] May 10 00:22:08.226830 systemd[1]: sshd@104-91.107.204.139:22-60.164.133.37:45488.service: Deactivated successfully. May 10 00:22:08.424731 systemd[1]: Started sshd@105-91.107.204.139:22-60.164.133.37:47028.service - OpenSSH per-connection server daemon (60.164.133.37:47028). May 10 00:22:09.365423 sshd[4285]: Connection closed by authenticating user root 60.164.133.37 port 47028 [preauth] May 10 00:22:09.370062 systemd[1]: sshd@105-91.107.204.139:22-60.164.133.37:47028.service: Deactivated successfully. May 10 00:22:09.571048 systemd[1]: Started sshd@106-91.107.204.139:22-60.164.133.37:48426.service - OpenSSH per-connection server daemon (60.164.133.37:48426). May 10 00:22:10.521118 sshd[4290]: Connection closed by authenticating user root 60.164.133.37 port 48426 [preauth] May 10 00:22:10.524886 systemd[1]: sshd@106-91.107.204.139:22-60.164.133.37:48426.service: Deactivated successfully. May 10 00:22:10.723691 systemd[1]: Started sshd@107-91.107.204.139:22-60.164.133.37:49706.service - OpenSSH per-connection server daemon (60.164.133.37:49706). May 10 00:22:11.683748 sshd[4299]: Connection closed by authenticating user root 60.164.133.37 port 49706 [preauth] May 10 00:22:11.687279 systemd[1]: sshd@107-91.107.204.139:22-60.164.133.37:49706.service: Deactivated successfully. May 10 00:22:11.888841 systemd[1]: Started sshd@108-91.107.204.139:22-60.164.133.37:50988.service - OpenSSH per-connection server daemon (60.164.133.37:50988). May 10 00:22:11.936917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566100211.mount: Deactivated successfully. 
May 10 00:22:11.969048 containerd[1476]: time="2025-05-10T00:22:11.968978619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:11.970911 containerd[1476]: time="2025-05-10T00:22:11.970876107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 10 00:22:11.972370 containerd[1476]: time="2025-05-10T00:22:11.972330354Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:11.978201 containerd[1476]: time="2025-05-10T00:22:11.978155540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:11.981140 containerd[1476]: time="2025-05-10T00:22:11.980689752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 6.59602353s" May 10 00:22:11.981250 containerd[1476]: time="2025-05-10T00:22:11.981134154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 10 00:22:11.998102 containerd[1476]: time="2025-05-10T00:22:11.998063790Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 10 00:22:12.016368 containerd[1476]: time="2025-05-10T00:22:12.016319032Z" level=info msg="CreateContainer within sandbox \"5e662c27fa95297a0ed8a11a04c7ff1bc90605a5f5ac7e28d07f159526838722\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01\"" May 10 00:22:12.017324 containerd[1476]: time="2025-05-10T00:22:12.017101235Z" level=info msg="StartContainer for \"f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01\"" May 10 00:22:12.048495 systemd[1]: Started cri-containerd-f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01.scope - libcontainer container f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01. May 10 00:22:12.085471 containerd[1476]: time="2025-05-10T00:22:12.085420939Z" level=info msg="StartContainer for \"f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01\" returns successfully" May 10 00:22:12.203414 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 10 00:22:12.203662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 10 00:22:12.834401 sshd[4304]: Connection closed by authenticating user root 60.164.133.37 port 50988 [preauth] May 10 00:22:12.836501 systemd[1]: sshd@108-91.107.204.139:22-60.164.133.37:50988.service: Deactivated successfully. May 10 00:22:13.045720 systemd[1]: Started sshd@109-91.107.204.139:22-60.164.133.37:52402.service - OpenSSH per-connection server daemon (60.164.133.37:52402).
May 10 00:22:14.010956 sshd[4395]: Connection closed by authenticating user root 60.164.133.37 port 52402 [preauth] May 10 00:22:14.013742 systemd[1]: sshd@109-91.107.204.139:22-60.164.133.37:52402.service: Deactivated successfully. May 10 00:22:14.216329 systemd[1]: Started sshd@110-91.107.204.139:22-60.164.133.37:53834.service - OpenSSH per-connection server daemon (60.164.133.37:53834). May 10 00:22:15.173825 sshd[4514]: Connection closed by authenticating user root 60.164.133.37 port 53834 [preauth] May 10 00:22:15.176973 systemd[1]: sshd@110-91.107.204.139:22-60.164.133.37:53834.service: Deactivated successfully. May 10 00:22:15.375614 systemd[1]: Started sshd@111-91.107.204.139:22-60.164.133.37:55352.service - OpenSSH per-connection server daemon (60.164.133.37:55352). May 10 00:22:16.315450 sshd[4540]: Connection closed by authenticating user root 60.164.133.37 port 55352 [preauth] May 10 00:22:16.319101 systemd[1]: sshd@111-91.107.204.139:22-60.164.133.37:55352.service: Deactivated successfully. May 10 00:22:16.526579 systemd[1]: Started sshd@112-91.107.204.139:22-60.164.133.37:56700.service - OpenSSH per-connection server daemon (60.164.133.37:56700). May 10 00:22:17.211910 containerd[1476]: time="2025-05-10T00:22:17.211827263Z" level=info msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" May 10 00:22:17.288220 kubelet[3104]: I0510 00:22:17.288063 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xqr4w" podStartSLOduration=6.470402484 podStartE2EDuration="23.288041815s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:21:55.165733073 +0000 UTC m=+22.084932668" lastFinishedPulling="2025-05-10 00:22:11.983372444 +0000 UTC m=+38.902571999" observedRunningTime="2025-05-10 00:22:12.460223287 +0000 UTC m=+39.379422882" watchObservedRunningTime="2025-05-10 00:22:17.288041815 +0000 UTC m=+44.207241410" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.285 [INFO][4601] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.285 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" iface="eth0" netns="/var/run/netns/cni-2d5bba15-881b-aa91-6631-60db43e5a264" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.286 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" iface="eth0" netns="/var/run/netns/cni-2d5bba15-881b-aa91-6631-60db43e5a264" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.287 [INFO][4601] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" iface="eth0" netns="/var/run/netns/cni-2d5bba15-881b-aa91-6631-60db43e5a264" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.287 [INFO][4601] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.287 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.326 [INFO][4608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.327 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.327 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.339 [WARNING][4608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.339 [INFO][4608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.341 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:17.345431 containerd[1476]: 2025-05-10 00:22:17.343 [INFO][4601] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:17.347695 containerd[1476]: time="2025-05-10T00:22:17.345641411Z" level=info msg="TearDown network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" successfully" May 10 00:22:17.347695 containerd[1476]: time="2025-05-10T00:22:17.345670611Z" level=info msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" returns successfully" May 10 00:22:17.348423 systemd[1]: run-netns-cni\x2d2d5bba15\x2d881b\x2daa91\x2d6631\x2d60db43e5a264.mount: Deactivated successfully. May 10 00:22:17.349347 containerd[1476]: time="2025-05-10T00:22:17.348942305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbc7f9c5-z5595,Uid:09b51544-4344-4b33-9ab6-929ff2781faf,Namespace:calico-system,Attempt:1,}" May 10 00:22:17.475889 sshd[4565]: Connection closed by authenticating user root 60.164.133.37 port 56700 [preauth] May 10 00:22:17.478783 systemd[1]: sshd@112-91.107.204.139:22-60.164.133.37:56700.service: Deactivated successfully. 
May 10 00:22:17.545171 systemd-networkd[1367]: cali847b68e0340: Link UP May 10 00:22:17.545643 systemd-networkd[1367]: cali847b68e0340: Gained carrier May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.415 [INFO][4616] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.433 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0 calico-kube-controllers-84fbc7f9c5- calico-system 09b51544-4344-4b33-9ab6-929ff2781faf 741 0 2025-05-10 00:21:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fbc7f9c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 calico-kube-controllers-84fbc7f9c5-z5595 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali847b68e0340 [] []}} ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.433 [INFO][4616] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.483 [INFO][4628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" HandleID="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.497 [INFO][4628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" HandleID="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d4a30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2389c948d4", "pod":"calico-kube-controllers-84fbc7f9c5-z5595", "timestamp":"2025-05-10 00:22:17.483802377 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.497 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.498 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.498 [INFO][4628] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.500 [INFO][4628] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.505 [INFO][4628] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.512 [INFO][4628] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.515 [INFO][4628] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.518 [INFO][4628] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.518 [INFO][4628] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.520 [INFO][4628] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7 May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.526 [INFO][4628] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.532 [INFO][4628] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.193/26] block=192.168.65.192/26 handle="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.532 [INFO][4628] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.193/26] handle="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.532 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:22:17.566218 containerd[1476]: 2025-05-10 00:22:17.532 [INFO][4628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.193/26] IPv6=[] ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" HandleID="k8s-pod-network.ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.534 [INFO][4616] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0", GenerateName:"calico-kube-controllers-84fbc7f9c5-", Namespace:"calico-system", SelfLink:"", UID:"09b51544-4344-4b33-9ab6-929ff2781faf", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbc7f9c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"calico-kube-controllers-84fbc7f9c5-z5595", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali847b68e0340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.534 [INFO][4616] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.193/32] ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.534 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali847b68e0340 ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.544 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10
00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.546 [INFO][4616] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0", GenerateName:"calico-kube-controllers-84fbc7f9c5-", Namespace:"calico-system", SelfLink:"", UID:"09b51544-4344-4b33-9ab6-929ff2781faf", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbc7f9c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7", Pod:"calico-kube-controllers-84fbc7f9c5-z5595", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali847b68e0340", MAC:"2e:16:b1:2a:bb:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:17.567042 containerd[1476]: 2025-05-10 00:22:17.562 [INFO][4616] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7" Namespace="calico-system" Pod="calico-kube-controllers-84fbc7f9c5-z5595" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:17.588451 containerd[1476]: time="2025-05-10T00:22:17.588123844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:17.588882 containerd[1476]: time="2025-05-10T00:22:17.588700366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:17.589268 containerd[1476]: time="2025-05-10T00:22:17.589127168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:17.589835 containerd[1476]: time="2025-05-10T00:22:17.589726211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:17.619678 systemd[1]: Started cri-containerd-ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7.scope - libcontainer container ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7.
May 10 00:22:17.653065 containerd[1476]: time="2025-05-10T00:22:17.653011670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbc7f9c5-z5595,Uid:09b51544-4344-4b33-9ab6-929ff2781faf,Namespace:calico-system,Attempt:1,} returns sandbox id \"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7\"" May 10 00:22:17.655333 containerd[1476]: time="2025-05-10T00:22:17.654875157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 10 00:22:17.671702 systemd[1]: Started sshd@113-91.107.204.139:22-60.164.133.37:58088.service - OpenSSH per-connection server daemon (60.164.133.37:58088). May 10 00:22:18.611036 sshd[4691]: Connection closed by authenticating user root 60.164.133.37 port 58088 [preauth] May 10 00:22:18.612681 systemd[1]: sshd@113-91.107.204.139:22-60.164.133.37:58088.service: Deactivated successfully. May 10 00:22:18.815807 systemd[1]: Started sshd@114-91.107.204.139:22-60.164.133.37:59302.service - OpenSSH per-connection server daemon (60.164.133.37:59302). May 10 00:22:19.212995 containerd[1476]: time="2025-05-10T00:22:19.212936166Z" level=info msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" May 10 00:22:19.213965 containerd[1476]: time="2025-05-10T00:22:19.213472208Z" level=info msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" May 10 00:22:19.215495 containerd[1476]: time="2025-05-10T00:22:19.212954526Z" level=info msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.304 [INFO][4782] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.306 [INFO][4782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" iface="eth0" netns="/var/run/netns/cni-c5858934-69de-3ef5-f1ff-bb638fc75f42" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.307 [INFO][4782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" iface="eth0" netns="/var/run/netns/cni-c5858934-69de-3ef5-f1ff-bb638fc75f42" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.307 [INFO][4782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" iface="eth0" netns="/var/run/netns/cni-c5858934-69de-3ef5-f1ff-bb638fc75f42" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.307 [INFO][4782] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.308 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.371 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.372 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.372 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.386 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.386 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.388 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:19.394540 containerd[1476]: 2025-05-10 00:22:19.392 [INFO][4782] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:19.397731 systemd[1]: run-netns-cni\x2dc5858934\x2d69de\x2d3ef5\x2df1ff\x2dbb638fc75f42.mount: Deactivated successfully. May 10 00:22:19.400408 containerd[1476]: time="2025-05-10T00:22:19.399902188Z" level=info msg="TearDown network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" successfully" May 10 00:22:19.400724 containerd[1476]: time="2025-05-10T00:22:19.400694951Z" level=info msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" returns successfully" May 10 00:22:19.402345 containerd[1476]: time="2025-05-10T00:22:19.402283837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h4s4c,Uid:ea774e10-d5e8-49b4-87ba-cd65e20304d9,Namespace:kube-system,Attempt:1,}" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.334 [INFO][4781] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.335 [INFO][4781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" iface="eth0" netns="/var/run/netns/cni-b185568c-b3b4-94f9-38a0-5fc8c702f41e" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.336 [INFO][4781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" iface="eth0" netns="/var/run/netns/cni-b185568c-b3b4-94f9-38a0-5fc8c702f41e" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.336 [INFO][4781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" iface="eth0" netns="/var/run/netns/cni-b185568c-b3b4-94f9-38a0-5fc8c702f41e" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.337 [INFO][4781] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.337 [INFO][4781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.372 [INFO][4809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.373 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.388 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.406 [WARNING][4809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.406 [INFO][4809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.409 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:19.415968 containerd[1476]: 2025-05-10 00:22:19.413 [INFO][4781] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:19.416600 containerd[1476]: time="2025-05-10T00:22:19.416573854Z" level=info msg="TearDown network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" successfully" May 10 00:22:19.416674 containerd[1476]: time="2025-05-10T00:22:19.416661134Z" level=info msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" returns successfully" May 10 00:22:19.421586 systemd-networkd[1367]: cali847b68e0340: Gained IPv6LL May 10 00:22:19.425640 systemd[1]: run-netns-cni\x2db185568c\x2db3b4\x2d94f9\x2d38a0\x2d5fc8c702f41e.mount: Deactivated successfully. May 10 00:22:19.428734 containerd[1476]: time="2025-05-10T00:22:19.427355497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-2dhwk,Uid:a161c57b-ec58-4358-a6d8-144a63406336,Namespace:calico-apiserver,Attempt:1,}" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.317 [INFO][4787] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.317 [INFO][4787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" iface="eth0" netns="/var/run/netns/cni-f6551eb4-1af3-d230-4ba8-0ff0700d091b" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.317 [INFO][4787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" iface="eth0" netns="/var/run/netns/cni-f6551eb4-1af3-d230-4ba8-0ff0700d091b" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.318 [INFO][4787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" iface="eth0" netns="/var/run/netns/cni-f6551eb4-1af3-d230-4ba8-0ff0700d091b" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.318 [INFO][4787] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.318 [INFO][4787] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.385 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.385 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.409 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.434 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.434 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.439 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:19.443697 containerd[1476]: 2025-05-10 00:22:19.441 [INFO][4787] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:19.444452 containerd[1476]: time="2025-05-10T00:22:19.444255444Z" level=info msg="TearDown network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" successfully" May 10 00:22:19.444452 containerd[1476]: time="2025-05-10T00:22:19.444285524Z" level=info msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" returns successfully" May 10 00:22:19.445821 containerd[1476]: time="2025-05-10T00:22:19.445557169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-55f69,Uid:9fe5f5d0-455a-4b01-9791-41d2483185a9,Namespace:calico-system,Attempt:1,}" May 10 00:22:19.447190 systemd[1]: run-netns-cni\x2df6551eb4\x2d1af3\x2dd230\x2d4ba8\x2d0ff0700d091b.mount: Deactivated successfully. May 10 00:22:19.691462 systemd-networkd[1367]: cali3db4b576531: Link UP May 10 00:22:19.693363 systemd-networkd[1367]: cali3db4b576531: Gained carrier May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.482 [INFO][4825] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.504 [INFO][4825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0 coredns-7db6d8ff4d- kube-system ea774e10-d5e8-49b4-87ba-cd65e20304d9 752 0 2025-05-10 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 coredns-7db6d8ff4d-h4s4c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3db4b576531 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.505 [INFO][4825] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.577 [INFO][4859] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" HandleID="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.616 [INFO][4859] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" HandleID="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031a670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-2389c948d4", "pod":"coredns-7db6d8ff4d-h4s4c", "timestamp":"2025-05-10 00:22:19.577398892 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.616 [INFO][4859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.616 [INFO][4859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.616 [INFO][4859] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.627 [INFO][4859] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.635 [INFO][4859] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.648 [INFO][4859] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.651 [INFO][4859] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.655 [INFO][4859] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.655 [INFO][4859] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.664 [INFO][4859] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7 May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.672 [INFO][4859] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.681 [INFO][4859] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.194/26] block=192.168.65.192/26 handle="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" 
host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.682 [INFO][4859] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.194/26] handle="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.682 [INFO][4859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:19.715753 containerd[1476]: 2025-05-10 00:22:19.682 [INFO][4859] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.194/26] IPv6=[] ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" HandleID="k8s-pod-network.448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.685 [INFO][4825] cni-plugin/k8s.go 386: Populated endpoint ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea774e10-d5e8-49b4-87ba-cd65e20304d9", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"coredns-7db6d8ff4d-h4s4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db4b576531", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.685 [INFO][4825] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.194/32] ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.685 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3db4b576531 
ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.691 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.691 [INFO][4825] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea774e10-d5e8-49b4-87ba-cd65e20304d9", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7", Pod:"coredns-7db6d8ff4d-h4s4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db4b576531", MAC:"c2:8d:a0:d9:f3:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.717143 containerd[1476]: 2025-05-10 00:22:19.710 [INFO][4825] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h4s4c" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:19.748204 sshd[4721]: Connection closed by authenticating user root 60.164.133.37 port 59302 [preauth] May 10 00:22:19.751907 containerd[1476]: time="2025-05-10T00:22:19.748812452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:19.751907 containerd[1476]: time="2025-05-10T00:22:19.748945573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:19.751907 containerd[1476]: time="2025-05-10T00:22:19.748966093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.751907 containerd[1476]: time="2025-05-10T00:22:19.749277294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.752781 systemd[1]: sshd@114-91.107.204.139:22-60.164.133.37:59302.service: Deactivated successfully. May 10 00:22:19.775986 systemd-networkd[1367]: calic2ba0503a99: Link UP May 10 00:22:19.778186 systemd-networkd[1367]: calic2ba0503a99: Gained carrier May 10 00:22:19.800741 systemd[1]: Started cri-containerd-448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7.scope - libcontainer container 448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7. May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.536 [INFO][4845] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.562 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0 csi-node-driver- calico-system 9fe5f5d0-455a-4b01-9791-41d2483185a9 753 0 2025-05-10 00:21:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 csi-node-driver-55f69 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic2ba0503a99 [] []}} ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.562 [INFO][4845] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.628 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" HandleID="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.668 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" HandleID="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c380), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-2389c948d4", "pod":"csi-node-driver-55f69", "timestamp":"2025-05-10 00:22:19.628321694 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.669 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.682 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.682 [INFO][4870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.685 [INFO][4870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.694 [INFO][4870] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.709 [INFO][4870] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.717 [INFO][4870] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.722 [INFO][4870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.722 [INFO][4870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.727 [INFO][4870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.734 [INFO][4870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4870] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.195/26] block=192.168.65.192/26 handle="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.195/26] handle="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:22:19.822041 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.195/26] IPv6=[] ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" HandleID="k8s-pod-network.db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.760 [INFO][4845] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fe5f5d0-455a-4b01-9791-41d2483185a9", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"csi-node-driver-55f69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2ba0503a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.762 [INFO][4845] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.195/32] ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.763 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2ba0503a99 ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.777 [INFO][4845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.782 [INFO][4845] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fe5f5d0-455a-4b01-9791-41d2483185a9", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d", Pod:"csi-node-driver-55f69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2ba0503a99", MAC:"8e:f0:87:2a:4d:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.822750 containerd[1476]: 2025-05-10 00:22:19.816 [INFO][4845] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d" Namespace="calico-system" Pod="csi-node-driver-55f69" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:19.861883 containerd[1476]: time="2025-05-10T00:22:19.861784300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:19.861883 containerd[1476]: time="2025-05-10T00:22:19.861844701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:19.861883 containerd[1476]: time="2025-05-10T00:22:19.861861501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.862717 containerd[1476]: time="2025-05-10T00:22:19.861937981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.877098 systemd-networkd[1367]: calia53702697d4: Link UP May 10 00:22:19.877348 systemd-networkd[1367]: calia53702697d4: Gained carrier May 10 00:22:19.921559 systemd[1]: Started cri-containerd-db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d.scope - libcontainer container db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d. 
May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.532 [INFO][4836] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.558 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0 calico-apiserver-5ff6889754- calico-apiserver a161c57b-ec58-4358-a6d8-144a63406336 754 0 2025-05-10 00:21:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff6889754 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 calico-apiserver-5ff6889754-2dhwk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia53702697d4 [] []}} ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.559 [INFO][4836] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.655 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" HandleID="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.675 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" HandleID="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-2389c948d4", "pod":"calico-apiserver-5ff6889754-2dhwk", "timestamp":"2025-05-10 00:22:19.655203801 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.675 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.754 [INFO][4876] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.765 [INFO][4876] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.783 [INFO][4876] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.805 [INFO][4876] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.809 [INFO][4876] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.818 [INFO][4876] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.818 [INFO][4876] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.821 [INFO][4876] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.835 [INFO][4876] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.867 [INFO][4876] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.196/26] block=192.168.65.192/26 handle="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.867 [INFO][4876] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.196/26] handle="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.867 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:22:19.928323 containerd[1476]: 2025-05-10 00:22:19.867 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.196/26] IPv6=[] ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" HandleID="k8s-pod-network.eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.871 [INFO][4836] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a161c57b-ec58-4358-a6d8-144a63406336", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"calico-apiserver-5ff6889754-2dhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53702697d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.871 [INFO][4836] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.196/32] ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.871 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia53702697d4 ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.881 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.881 [INFO][4836] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a161c57b-ec58-4358-a6d8-144a63406336", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb", Pod:"calico-apiserver-5ff6889754-2dhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53702697d4", MAC:"82:62:90:b5:95:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:19.928933 containerd[1476]: 2025-05-10 00:22:19.907 [INFO][4836] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-2dhwk" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:19.956860 containerd[1476]: time="2025-05-10T00:22:19.956627357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h4s4c,Uid:ea774e10-d5e8-49b4-87ba-cd65e20304d9,Namespace:kube-system,Attempt:1,} returns sandbox id \"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7\"" May 10 00:22:19.957583 systemd[1]: Started sshd@115-91.107.204.139:22-60.164.133.37:60730.service - OpenSSH per-connection server daemon (60.164.133.37:60730). May 10 00:22:19.974068 containerd[1476]: time="2025-05-10T00:22:19.973671544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:19.974068 containerd[1476]: time="2025-05-10T00:22:19.973718425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:19.974068 containerd[1476]: time="2025-05-10T00:22:19.973728825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.974068 containerd[1476]: time="2025-05-10T00:22:19.973803265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:19.989311 containerd[1476]: time="2025-05-10T00:22:19.989260806Z" level=info msg="CreateContainer within sandbox \"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:22:20.007931 systemd[1]: Started cri-containerd-eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb.scope - libcontainer container eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb. May 10 00:22:20.020931 containerd[1476]: time="2025-05-10T00:22:20.020861250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-55f69,Uid:9fe5f5d0-455a-4b01-9791-41d2483185a9,Namespace:calico-system,Attempt:1,} returns sandbox id \"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d\"" May 10 00:22:20.022257 containerd[1476]: time="2025-05-10T00:22:20.021202812Z" level=info msg="CreateContainer within sandbox \"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59ef646656bc58d840ae0dfd3be1d1c11e1bb31cec45ad10068b4c1e251172f1\"" May 10 00:22:20.023817 containerd[1476]: time="2025-05-10T00:22:20.023778462Z" level=info msg="StartContainer for \"59ef646656bc58d840ae0dfd3be1d1c11e1bb31cec45ad10068b4c1e251172f1\"" May 10 00:22:20.054526 systemd[1]: Started cri-containerd-59ef646656bc58d840ae0dfd3be1d1c11e1bb31cec45ad10068b4c1e251172f1.scope - libcontainer container 59ef646656bc58d840ae0dfd3be1d1c11e1bb31cec45ad10068b4c1e251172f1. May 10 00:22:20.064101 containerd[1476]: time="2025-05-10T00:22:20.064065939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-2dhwk,Uid:a161c57b-ec58-4358-a6d8-144a63406336,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb\"" May 10 00:22:20.089244 containerd[1476]: time="2025-05-10T00:22:20.089205518Z" level=info msg="StartContainer for \"59ef646656bc58d840ae0dfd3be1d1c11e1bb31cec45ad10068b4c1e251172f1\" returns successfully" May 10 00:22:20.212319 containerd[1476]: time="2025-05-10T00:22:20.210853313Z" level=info msg="StopPodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" May 10 00:22:20.325777 kubelet[3104]: I0510 00:22:20.324600 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" iface="eth0" netns="/var/run/netns/cni-0f2b70a0-07db-cd8b-2478-4386bbcf3eb9" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" iface="eth0" netns="/var/run/netns/cni-0f2b70a0-07db-cd8b-2478-4386bbcf3eb9" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" iface="eth0" netns="/var/run/netns/cni-0f2b70a0-07db-cd8b-2478-4386bbcf3eb9" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.289 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.316 [INFO][5122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.317 [INFO][5122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.317 [INFO][5122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.334 [WARNING][5122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.335 [INFO][5122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.339 [INFO][5122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:20.345380 containerd[1476]: 2025-05-10 00:22:20.343 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:20.347591 containerd[1476]: time="2025-05-10T00:22:20.345500959Z" level=info msg="TearDown network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" successfully" May 10 00:22:20.347591 containerd[1476]: time="2025-05-10T00:22:20.345528399Z" level=info msg="StopPodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" returns successfully" May 10 00:22:20.347591 containerd[1476]: time="2025-05-10T00:22:20.346425003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-226jl,Uid:42b78a76-4b54-49d0-9136-33da1a2535fc,Namespace:calico-apiserver,Attempt:1,}" May 10 00:22:20.413694 systemd[1]: run-netns-cni\x2d0f2b70a0\x2d07db\x2dcd8b\x2d2478\x2d4386bbcf3eb9.mount: Deactivated successfully. 
May 10 00:22:20.502626 kubelet[3104]: I0510 00:22:20.501745 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h4s4c" podStartSLOduration=33.501726049 podStartE2EDuration="33.501726049s" podCreationTimestamp="2025-05-10 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:22:20.501199727 +0000 UTC m=+47.420399362" watchObservedRunningTime="2025-05-10 00:22:20.501726049 +0000 UTC m=+47.420925644" May 10 00:22:20.543133 systemd-networkd[1367]: calibb0a50455a5: Link UP May 10 00:22:20.543942 systemd-networkd[1367]: calibb0a50455a5: Gained carrier May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.395 [INFO][5128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.429 [INFO][5128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0 calico-apiserver-5ff6889754- calico-apiserver 42b78a76-4b54-49d0-9136-33da1a2535fc 770 0 2025-05-10 00:21:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ff6889754 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 calico-apiserver-5ff6889754-226jl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibb0a50455a5 [] []}} ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.429 [INFO][5128] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.465 [INFO][5143] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" HandleID="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.488 [INFO][5143] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" HandleID="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-2389c948d4", "pod":"calico-apiserver-5ff6889754-226jl", "timestamp":"2025-05-10 00:22:20.465383387 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.488 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.488 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.488 [INFO][5143] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.492 [INFO][5143] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.500 [INFO][5143] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.513 [INFO][5143] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.515 [INFO][5143] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.518 [INFO][5143] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.518 [INFO][5143] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.520 [INFO][5143] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.530 [INFO][5143] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.538 [INFO][5143] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.197/26] block=192.168.65.192/26 handle="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.538 [INFO][5143] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.197/26] handle="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.538 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 10 00:22:20.558125 containerd[1476]: 2025-05-10 00:22:20.538 [INFO][5143] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.197/26] IPv6=[] ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" HandleID="k8s-pod-network.480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.541 [INFO][5128] cni-plugin/k8s.go 386: Populated endpoint ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b78a76-4b54-49d0-9136-33da1a2535fc", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"calico-apiserver-5ff6889754-226jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0a50455a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.541 [INFO][5128] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.197/32] ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.541 [INFO][5128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb0a50455a5 ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.543 [INFO][5128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.544 [INFO][5128] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b78a76-4b54-49d0-9136-33da1a2535fc", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e", Pod:"calico-apiserver-5ff6889754-226jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0a50455a5", MAC:"92:08:99:58:6f:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:20.560248 containerd[1476]: 2025-05-10 00:22:20.553 [INFO][5128] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e" Namespace="calico-apiserver" Pod="calico-apiserver-5ff6889754-226jl" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:20.582619 containerd[1476]: time="2025-05-10T00:22:20.582317924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:20.582619 containerd[1476]: time="2025-05-10T00:22:20.582393725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:20.582619 containerd[1476]: time="2025-05-10T00:22:20.582412045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:20.582619 containerd[1476]: time="2025-05-10T00:22:20.582564965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:20.614614 systemd[1]: Started cri-containerd-480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e.scope - libcontainer container 480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e. 
May 10 00:22:20.656019 containerd[1476]: time="2025-05-10T00:22:20.655961492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ff6889754-226jl,Uid:42b78a76-4b54-49d0-9136-33da1a2535fc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e\"" May 10 00:22:20.891674 systemd-networkd[1367]: calia53702697d4: Gained IPv6LL May 10 00:22:20.892387 systemd-networkd[1367]: calic2ba0503a99: Gained IPv6LL May 10 00:22:20.938932 sshd[4987]: Connection closed by authenticating user root 60.164.133.37 port 60730 [preauth] May 10 00:22:20.945217 systemd[1]: sshd@115-91.107.204.139:22-60.164.133.37:60730.service: Deactivated successfully. May 10 00:22:20.955399 systemd-networkd[1367]: cali3db4b576531: Gained IPv6LL May 10 00:22:21.144752 systemd[1]: Started sshd@116-91.107.204.139:22-60.164.133.37:34106.service - OpenSSH per-connection server daemon (60.164.133.37:34106). May 10 00:22:21.221091 containerd[1476]: time="2025-05-10T00:22:21.221055127Z" level=info msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" May 10 00:22:21.265322 kernel: bpftool[5249]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 10 00:22:21.272135 containerd[1476]: time="2025-05-10T00:22:21.270214876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:21.285838 containerd[1476]: time="2025-05-10T00:22:21.285785816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 10 00:22:21.313209 containerd[1476]: time="2025-05-10T00:22:21.311551675Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:21.318655 containerd[1476]: time="2025-05-10T00:22:21.318605742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:21.337323 containerd[1476]: time="2025-05-10T00:22:21.337081174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.682165176s" May 10 00:22:21.337453 containerd[1476]: time="2025-05-10T00:22:21.337332175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 10 00:22:21.339431 containerd[1476]: time="2025-05-10T00:22:21.339397422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 10 00:22:21.376583 containerd[1476]: time="2025-05-10T00:22:21.376540725Z" level=info msg="CreateContainer within sandbox \"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 10 00:22:21.401894 containerd[1476]: time="2025-05-10T00:22:21.395683519Z" level=info msg="CreateContainer within sandbox 
\"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc\"" May 10 00:22:21.401894 containerd[1476]: time="2025-05-10T00:22:21.398488570Z" level=info msg="StartContainer for \"d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc\"" May 10 00:22:21.478563 systemd[1]: Started cri-containerd-d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc.scope - libcontainer container d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc. May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.363 [INFO][5246] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.363 [INFO][5246] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" iface="eth0" netns="/var/run/netns/cni-d49d45cc-ee6f-1b46-4e50-1ebd71af4657" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.363 [INFO][5246] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" iface="eth0" netns="/var/run/netns/cni-d49d45cc-ee6f-1b46-4e50-1ebd71af4657" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.363 [INFO][5246] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" iface="eth0" netns="/var/run/netns/cni-d49d45cc-ee6f-1b46-4e50-1ebd71af4657" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.363 [INFO][5246] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.364 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.438 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.441 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.441 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.468 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.468 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.476 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:21.489348 containerd[1476]: 2025-05-10 00:22:21.481 [INFO][5246] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:21.494254 systemd[1]: run-netns-cni\x2dd49d45cc\x2dee6f\x2d1b46\x2d4e50\x2d1ebd71af4657.mount: Deactivated successfully. May 10 00:22:21.495003 containerd[1476]: time="2025-05-10T00:22:21.494417019Z" level=info msg="TearDown network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" successfully" May 10 00:22:21.495003 containerd[1476]: time="2025-05-10T00:22:21.494444179Z" level=info msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" returns successfully" May 10 00:22:21.498329 containerd[1476]: time="2025-05-10T00:22:21.495940705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ffbwh,Uid:e1040029-c5db-4540-b478-e7a12da2dd29,Namespace:kube-system,Attempt:1,}" May 10 00:22:21.576215 containerd[1476]: time="2025-05-10T00:22:21.576178414Z" level=info msg="StartContainer for \"d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc\" returns successfully" May 10 00:22:21.715279 systemd-networkd[1367]: calif0deb962a7e: Link UP May 10 00:22:21.717590 systemd-networkd[1367]: calif0deb962a7e: Gained carrier May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.593 [INFO][5303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0 coredns-7db6d8ff4d- kube-system e1040029-c5db-4540-b478-e7a12da2dd29 794 0 2025-05-10 00:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-2389c948d4 coredns-7db6d8ff4d-ffbwh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif0deb962a7e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.593 [INFO][5303] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.649 [INFO][5325] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" HandleID="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.667 [INFO][5325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" HandleID="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334d90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-2389c948d4", "pod":"coredns-7db6d8ff4d-ffbwh", "timestamp":"2025-05-10 00:22:21.649878617 +0000 UTC"}, Hostname:"ci-4081-3-3-n-2389c948d4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.668 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.668 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.669 [INFO][5325] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-2389c948d4' May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.671 [INFO][5325] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.677 [INFO][5325] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.683 [INFO][5325] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.686 [INFO][5325] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.689 [INFO][5325] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.689 [INFO][5325] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.691 [INFO][5325] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.695 [INFO][5325] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.706 [INFO][5325] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.198/26] block=192.168.65.192/26 
handle="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.707 [INFO][5325] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.198/26] handle="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" host="ci-4081-3-3-n-2389c948d4" May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.707 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:21.739432 containerd[1476]: 2025-05-10 00:22:21.707 [INFO][5325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.198/26] IPv6=[] ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" HandleID="k8s-pod-network.be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.711 [INFO][5303] cni-plugin/k8s.go 386: Populated endpoint ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1040029-c5db-4540-b478-e7a12da2dd29", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"", Pod:"coredns-7db6d8ff4d-ffbwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0deb962a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.711 [INFO][5303] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.198/32] ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.711 [INFO][5303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calif0deb962a7e ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.715 [INFO][5303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.717 [INFO][5303] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1040029-c5db-4540-b478-e7a12da2dd29", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a", Pod:"coredns-7db6d8ff4d-ffbwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0deb962a7e", MAC:"96:c9:91:42:e0:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:21.740056 containerd[1476]: 2025-05-10 00:22:21.734 [INFO][5303] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-ffbwh" WorkloadEndpoint="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:21.788418 containerd[1476]: time="2025-05-10T00:22:21.786816184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:22:21.789124 containerd[1476]: time="2025-05-10T00:22:21.788318750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:22:21.789124 containerd[1476]: time="2025-05-10T00:22:21.788915033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:21.789124 containerd[1476]: time="2025-05-10T00:22:21.789055753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:22:21.809533 systemd[1]: Started cri-containerd-be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a.scope - libcontainer container be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a. May 10 00:22:21.856969 containerd[1476]: time="2025-05-10T00:22:21.856866654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ffbwh,Uid:e1040029-c5db-4540-b478-e7a12da2dd29,Namespace:kube-system,Attempt:1,} returns sandbox id \"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a\"" May 10 00:22:21.861738 containerd[1476]: time="2025-05-10T00:22:21.861630072Z" level=info msg="CreateContainer within sandbox \"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:22:21.880252 containerd[1476]: time="2025-05-10T00:22:21.880116664Z" level=info msg="CreateContainer within sandbox \"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e00683e6e0c71e75b12f647cc2d169a7520389c589be14fca634212a462e4bc2\"" May 10 00:22:21.882498 containerd[1476]: time="2025-05-10T00:22:21.881467469Z" level=info msg="StartContainer for \"e00683e6e0c71e75b12f647cc2d169a7520389c589be14fca634212a462e4bc2\"" May 10 00:22:21.935561 systemd[1]: Started cri-containerd-e00683e6e0c71e75b12f647cc2d169a7520389c589be14fca634212a462e4bc2.scope - libcontainer container e00683e6e0c71e75b12f647cc2d169a7520389c589be14fca634212a462e4bc2. May 10 00:22:21.974034 containerd[1476]: time="2025-05-10T00:22:21.973941385Z" level=info msg="StartContainer for \"e00683e6e0c71e75b12f647cc2d169a7520389c589be14fca634212a462e4bc2\" returns successfully" May 10 00:22:21.981197 systemd-networkd[1367]: vxlan.calico: Link UP May 10 00:22:21.981204 systemd-networkd[1367]: vxlan.calico: Gained carrier May 10 00:22:22.095416 sshd[5218]: Connection closed by authenticating user root 60.164.133.37 port 34106 [preauth] May 10 00:22:22.098932 systemd[1]: sshd@116-91.107.204.139:22-60.164.133.37:34106.service: Deactivated successfully. May 10 00:22:22.236371 systemd-networkd[1367]: calibb0a50455a5: Gained IPv6LL May 10 00:22:22.297227 systemd[1]: Started sshd@117-91.107.204.139:22-60.164.133.37:35388.service - OpenSSH per-connection server daemon (60.164.133.37:35388). 
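The ipam entries logged between 00:22:21.649 and 00:22:21.707 spell out the allocation path for the coredns pod: confirm this host's affinity for block 192.168.65.192/26, load the block, then claim the next free address, which lands on 192.168.65.198. A minimal sketch of that next-free step, assuming the block tracks allocations as a plain set (the real ipam.go also records a handle and writes the block back with a compare-and-swap):

package main

import (
	"fmt"
	"net"
)

// nextFree returns the first unallocated address in the block, mimicking the
// "Attempting to assign 1 addresses from block" step in the log above.
func nextFree(block *net.IPNet, allocated map[string]bool) (net.IP, error) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = inc(ip) {
		if !allocated[ip.String()] {
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", block)
}

// inc returns ip+1 without mutating its argument.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.65.192/26")
	allocated := map[string]bool{} // .192 through .197 already handed out in this boot
	for i := 192; i <= 197; i++ {
		allocated[fmt.Sprintf("192.168.65.%d", i)] = true
	}
	ip, err := nextFree(block, allocated)
	fmt.Println(ip, err) // 192.168.65.198 <nil>, matching the claimed IP above
}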
May 10 00:22:22.534576 kubelet[3104]: I0510 00:22:22.534351 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fbc7f9c5-z5595" podStartSLOduration=24.850292888 podStartE2EDuration="28.534329431s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:22:17.654649877 +0000 UTC m=+44.573849472" lastFinishedPulling="2025-05-10 00:22:21.33868642 +0000 UTC m=+48.257886015" observedRunningTime="2025-05-10 00:22:22.524622594 +0000 UTC m=+49.443822149" watchObservedRunningTime="2025-05-10 00:22:22.534329431 +0000 UTC m=+49.453529026" May 10 00:22:22.551678 kubelet[3104]: I0510 00:22:22.551074 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ffbwh" podStartSLOduration=35.551057694 podStartE2EDuration="35.551057694s" podCreationTimestamp="2025-05-10 00:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:22:22.550272811 +0000 UTC m=+49.469472406" watchObservedRunningTime="2025-05-10 00:22:22.551057694 +0000 UTC m=+49.470257289" May 10 00:22:22.878592 systemd-networkd[1367]: calif0deb962a7e: Gained IPv6LL May 10 00:22:23.032035 containerd[1476]: time="2025-05-10T00:22:23.031969436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:23.033048 containerd[1476]: time="2025-05-10T00:22:23.033007960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 10 00:22:23.034844 containerd[1476]: time="2025-05-10T00:22:23.033828083Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:23.036910 containerd[1476]: time="2025-05-10T00:22:23.036463213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:23.037398 containerd[1476]: time="2025-05-10T00:22:23.037370096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.697471472s" May 10 00:22:23.037550 containerd[1476]: time="2025-05-10T00:22:23.037529497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 10 00:22:23.041466 containerd[1476]: time="2025-05-10T00:22:23.040910950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:22:23.042511 containerd[1476]: time="2025-05-10T00:22:23.042333955Z" level=info msg="CreateContainer within sandbox \"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 10 00:22:23.083998 containerd[1476]: time="2025-05-10T00:22:23.083957070Z" level=info msg="CreateContainer within sandbox \"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} 
returns container id \"6aa5315beeef6de7cdec11790cdcadead04ed07236ef2713f64746f4eb2e440f\"" May 10 00:22:23.084991 containerd[1476]: time="2025-05-10T00:22:23.084952154Z" level=info msg="StartContainer for \"6aa5315beeef6de7cdec11790cdcadead04ed07236ef2713f64746f4eb2e440f\"" May 10 00:22:23.119501 systemd[1]: Started cri-containerd-6aa5315beeef6de7cdec11790cdcadead04ed07236ef2713f64746f4eb2e440f.scope - libcontainer container 6aa5315beeef6de7cdec11790cdcadead04ed07236ef2713f64746f4eb2e440f. May 10 00:22:23.158910 containerd[1476]: time="2025-05-10T00:22:23.158811110Z" level=info msg="StartContainer for \"6aa5315beeef6de7cdec11790cdcadead04ed07236ef2713f64746f4eb2e440f\" returns successfully" May 10 00:22:23.260671 sshd[5521]: Connection closed by authenticating user root 60.164.133.37 port 35388 [preauth] May 10 00:22:23.263154 systemd[1]: sshd@117-91.107.204.139:22-60.164.133.37:35388.service: Deactivated successfully. May 10 00:22:23.469026 systemd[1]: Started sshd@118-91.107.204.139:22-60.164.133.37:36722.service - OpenSSH per-connection server daemon (60.164.133.37:36722). May 10 00:22:23.899911 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL May 10 00:22:24.429191 sshd[5590]: Connection closed by authenticating user root 60.164.133.37 port 36722 [preauth] May 10 00:22:24.432844 systemd[1]: sshd@118-91.107.204.139:22-60.164.133.37:36722.service: Deactivated successfully. May 10 00:22:24.628856 systemd[1]: Started sshd@119-91.107.204.139:22-60.164.133.37:37984.service - OpenSSH per-connection server daemon (60.164.133.37:37984). May 10 00:22:25.560849 sshd[5597]: Connection closed by authenticating user root 60.164.133.37 port 37984 [preauth] May 10 00:22:25.563224 systemd[1]: sshd@119-91.107.204.139:22-60.164.133.37:37984.service: Deactivated successfully. May 10 00:22:25.770790 systemd[1]: Started sshd@120-91.107.204.139:22-60.164.133.37:39542.service - OpenSSH per-connection server daemon (60.164.133.37:39542). 
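The kubelet pod_startup_latency_tracker entries above follow a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. For calico-kube-controllers-84fbc7f9c5-z5595: 28.534329431s minus 3.684036543s of pulling gives exactly the reported 24.850292888s. The arithmetic, with timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-10 00:21:54 +0000 UTC")
	watched := mustParse("2025-05-10 00:22:22.534329431 +0000 UTC")
	pullStart := mustParse("2025-05-10 00:22:17.654649877 +0000 UTC")
	pullEnd := mustParse("2025-05-10 00:22:21.33868642 +0000 UTC")

	e2e := watched.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // pull time is excluded from the SLO figure
	fmt.Println(e2e, slo)               // 28.534329431s 24.850292888s
}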
May 10 00:22:26.688318 containerd[1476]: time="2025-05-10T00:22:26.688226354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:26.690017 containerd[1476]: time="2025-05-10T00:22:26.689952200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 10 00:22:26.691074 containerd[1476]: time="2025-05-10T00:22:26.691014724Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:26.694201 containerd[1476]: time="2025-05-10T00:22:26.694150015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:26.695870 containerd[1476]: time="2025-05-10T00:22:26.695819221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 3.654859671s" May 10 00:22:26.695870 containerd[1476]: time="2025-05-10T00:22:26.695864061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:22:26.697710 containerd[1476]: time="2025-05-10T00:22:26.697383347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 10 00:22:26.700790 containerd[1476]: time="2025-05-10T00:22:26.700603318Z" level=info msg="CreateContainer within sandbox \"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:22:26.723355 containerd[1476]: time="2025-05-10T00:22:26.723201999Z" level=info msg="CreateContainer within sandbox \"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c\"" May 10 00:22:26.723981 sshd[5603]: Connection closed by authenticating user root 60.164.133.37 port 39542 [preauth] May 10 00:22:26.725532 containerd[1476]: time="2025-05-10T00:22:26.725385847Z" level=info msg="StartContainer for \"0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c\"" May 10 00:22:26.734260 systemd[1]: sshd@120-91.107.204.139:22-60.164.133.37:39542.service: Deactivated successfully. May 10 00:22:26.763386 systemd[1]: run-containerd-runc-k8s.io-0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c-runc.dU8hpx.mount: Deactivated successfully. May 10 00:22:26.772497 systemd[1]: Started cri-containerd-0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c.scope - libcontainer container 0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c. 
May 10 00:22:26.814975 containerd[1476]: time="2025-05-10T00:22:26.814886048Z" level=info msg="StartContainer for \"0ca6a0ffeb922469e1a2352c84a98955c0ac19a990bab118987775954c9e8f7c\" returns successfully" May 10 00:22:26.931393 systemd[1]: Started sshd@121-91.107.204.139:22-60.164.133.37:40978.service - OpenSSH per-connection server daemon (60.164.133.37:40978). May 10 00:22:27.094879 containerd[1476]: time="2025-05-10T00:22:27.094809286Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:27.097938 containerd[1476]: time="2025-05-10T00:22:27.096821373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 10 00:22:27.101414 containerd[1476]: time="2025-05-10T00:22:27.101369989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 403.946402ms" May 10 00:22:27.101414 containerd[1476]: time="2025-05-10T00:22:27.101411229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 10 00:22:27.103763 containerd[1476]: time="2025-05-10T00:22:27.102230232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 10 00:22:27.106664 containerd[1476]: time="2025-05-10T00:22:27.106612888Z" level=info msg="CreateContainer within sandbox \"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 10 00:22:27.145617 containerd[1476]: time="2025-05-10T00:22:27.145463985Z" level=info msg="CreateContainer within sandbox \"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b3066de8aa322462521b667cbe95719a0f86c9cf9aa2667688c4e8c57769268e\"" May 10 00:22:27.147582 containerd[1476]: time="2025-05-10T00:22:27.147531192Z" level=info msg="StartContainer for \"b3066de8aa322462521b667cbe95719a0f86c9cf9aa2667688c4e8c57769268e\"" May 10 00:22:27.181575 systemd[1]: Started cri-containerd-b3066de8aa322462521b667cbe95719a0f86c9cf9aa2667688c4e8c57769268e.scope - libcontainer container b3066de8aa322462521b667cbe95719a0f86c9cf9aa2667688c4e8c57769268e. 
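Note that ghcr.io/flatcar/calico/apiserver:v3.29.3 is "pulled" twice in this window: the first pull reads ~40 MB in 3.654859671s, while the second returns in 403.946402ms with only 77 bytes read and an ImageUpdate rather than ImageCreate event, since the content is already in the store and only the registry check runs. A sketch of timing such a pull with containerd's Go client (assuming the client API of this containerd release; the socket path and k8s.io namespace are the CRI defaults seen in this log):

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumes a reachable containerd socket and the k8s.io namespace used by CRI.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// A warm pull only revalidates the manifest (the "bytes read=77" case above),
	// so the elapsed time drops from seconds to milliseconds on a second run.
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}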
May 10 00:22:27.283753 containerd[1476]: time="2025-05-10T00:22:27.283713593Z" level=info msg="StartContainer for \"b3066de8aa322462521b667cbe95719a0f86c9cf9aa2667688c4e8c57769268e\" returns successfully" May 10 00:22:27.544352 kubelet[3104]: I0510 00:22:27.544098 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff6889754-2dhwk" podStartSLOduration=26.913918798 podStartE2EDuration="33.544083393s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:22:20.066533949 +0000 UTC m=+46.985733544" lastFinishedPulling="2025-05-10 00:22:26.696698504 +0000 UTC m=+53.615898139" observedRunningTime="2025-05-10 00:22:27.541562144 +0000 UTC m=+54.460761739" watchObservedRunningTime="2025-05-10 00:22:27.544083393 +0000 UTC m=+54.463282988" May 10 00:22:27.717128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887339782.mount: Deactivated successfully. May 10 00:22:27.890140 sshd[5651]: Connection closed by authenticating user root 60.164.133.37 port 40978 [preauth] May 10 00:22:27.894719 systemd[1]: sshd@121-91.107.204.139:22-60.164.133.37:40978.service: Deactivated successfully. May 10 00:22:28.094577 systemd[1]: Started sshd@122-91.107.204.139:22-60.164.133.37:42302.service - OpenSSH per-connection server daemon (60.164.133.37:42302). May 10 00:22:28.535340 kubelet[3104]: I0510 00:22:28.534678 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:22:29.034216 containerd[1476]: time="2025-05-10T00:22:29.034164928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:29.035830 containerd[1476]: time="2025-05-10T00:22:29.035757134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 10 00:22:29.038243 containerd[1476]: time="2025-05-10T00:22:29.037597740Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:29.038573 containerd[1476]: time="2025-05-10T00:22:29.038542223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 10 00:22:29.039344 containerd[1476]: time="2025-05-10T00:22:29.039287706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.937024274s" May 10 00:22:29.040106 containerd[1476]: time="2025-05-10T00:22:29.040079429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 10 00:22:29.044915 containerd[1476]: time="2025-05-10T00:22:29.044768085Z" level=info msg="CreateContainer within sandbox \"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 10 00:22:29.062039 sshd[5704]: 
Connection closed by authenticating user root 60.164.133.37 port 42302 [preauth] May 10 00:22:29.069034 systemd[1]: sshd@122-91.107.204.139:22-60.164.133.37:42302.service: Deactivated successfully. May 10 00:22:29.071281 containerd[1476]: time="2025-05-10T00:22:29.069809971Z" level=info msg="CreateContainer within sandbox \"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b\"" May 10 00:22:29.071281 containerd[1476]: time="2025-05-10T00:22:29.070777974Z" level=info msg="StartContainer for \"6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b\"" May 10 00:22:29.116449 kubelet[3104]: I0510 00:22:29.115779 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ff6889754-226jl" podStartSLOduration=28.671334716 podStartE2EDuration="35.115758129s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:22:20.65802754 +0000 UTC m=+47.577227135" lastFinishedPulling="2025-05-10 00:22:27.102450953 +0000 UTC m=+54.021650548" observedRunningTime="2025-05-10 00:22:27.556964039 +0000 UTC m=+54.476163634" watchObservedRunningTime="2025-05-10 00:22:29.115758129 +0000 UTC m=+56.034957724" May 10 00:22:29.132853 systemd[1]: run-containerd-runc-k8s.io-6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b-runc.pjKQHW.mount: Deactivated successfully. May 10 00:22:29.146531 systemd[1]: Started cri-containerd-6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b.scope - libcontainer container 6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b. May 10 00:22:29.190887 containerd[1476]: time="2025-05-10T00:22:29.190840627Z" level=info msg="StartContainer for \"6e8456367c5c15a704ce7067676c6738d167b0e4ece56aa618c9943c6c7c211b\" returns successfully" May 10 00:22:29.267718 systemd[1]: Started sshd@123-91.107.204.139:22-60.164.133.37:43694.service - OpenSSH per-connection server daemon (60.164.133.37:43694). May 10 00:22:29.337554 kubelet[3104]: I0510 00:22:29.337440 3104 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 10 00:22:29.340812 kubelet[3104]: I0510 00:22:29.340665 3104 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 10 00:22:30.230851 sshd[5751]: Connection closed by authenticating user root 60.164.133.37 port 43694 [preauth] May 10 00:22:30.234712 systemd[1]: sshd@123-91.107.204.139:22-60.164.133.37:43694.service: Deactivated successfully. May 10 00:22:30.440587 systemd[1]: Started sshd@124-91.107.204.139:22-60.164.133.37:45054.service - OpenSSH per-connection server daemon (60.164.133.37:45054). 
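Interleaved with all of this, 60.164.133.37 keeps opening a connection roughly once per second, failing root preauth, and having its per-connection sshd@… unit torn down (services 115 through 126 so far in this boot). A sketch for tallying such preauth failures per source from journal text on stdin, assuming lines shaped like the sshd entries above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the sshd lines in this journal, e.g.
// "Connection closed by authenticating user root 60.164.133.37 port 60730 [preauth]"
var preauth = regexp.MustCompile(`Connection closed by authenticating user (\S+) ([0-9.]+) port \d+ \[preauth\]`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := preauth.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+"@"+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%-30s %d\n", k, n) // e.g. root@60.164.133.37  12
	}
}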
May 10 00:22:30.940151 kubelet[3104]: I0510 00:22:30.940088 3104 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 10 00:22:30.972174 kubelet[3104]: I0510 00:22:30.972056 3104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-55f69" podStartSLOduration=27.955937349 podStartE2EDuration="36.972022394s" podCreationTimestamp="2025-05-10 00:21:54 +0000 UTC" firstStartedPulling="2025-05-10 00:22:20.024804146 +0000 UTC m=+46.944003701" lastFinishedPulling="2025-05-10 00:22:29.040889151 +0000 UTC m=+55.960088746" observedRunningTime="2025-05-10 00:22:29.556712046 +0000 UTC m=+56.475911681" watchObservedRunningTime="2025-05-10 00:22:30.972022394 +0000 UTC m=+57.891221989" May 10 00:22:31.419057 sshd[5757]: Connection closed by authenticating user root 60.164.133.37 port 45054 [preauth] May 10 00:22:31.422857 systemd[1]: sshd@124-91.107.204.139:22-60.164.133.37:45054.service: Deactivated successfully. May 10 00:22:31.616688 systemd[1]: Started sshd@125-91.107.204.139:22-60.164.133.37:46458.service - OpenSSH per-connection server daemon (60.164.133.37:46458). May 10 00:22:32.550607 sshd[5764]: Connection closed by authenticating user root 60.164.133.37 port 46458 [preauth] May 10 00:22:32.553766 systemd[1]: sshd@125-91.107.204.139:22-60.164.133.37:46458.service: Deactivated successfully. May 10 00:22:32.756570 systemd[1]: Started sshd@126-91.107.204.139:22-60.164.133.37:47904.service - OpenSSH per-connection server daemon (60.164.133.37:47904). May 10 00:22:33.234409 containerd[1476]: time="2025-05-10T00:22:33.234364003Z" level=info msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.306 [WARNING][5793] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a161c57b-ec58-4358-a6d8-144a63406336", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb", Pod:"calico-apiserver-5ff6889754-2dhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53702697d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.306 [INFO][5793] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.307 [INFO][5793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" iface="eth0" netns="" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.307 [INFO][5793] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.307 [INFO][5793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.338 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.338 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.338 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.348 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.348 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.350 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.356584 containerd[1476]: 2025-05-10 00:22:33.353 [INFO][5793] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.357692 containerd[1476]: time="2025-05-10T00:22:33.356672523Z" level=info msg="TearDown network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" successfully" May 10 00:22:33.357692 containerd[1476]: time="2025-05-10T00:22:33.356744043Z" level=info msg="StopPodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" returns successfully" May 10 00:22:33.358287 containerd[1476]: time="2025-05-10T00:22:33.357978087Z" level=info msg="RemovePodSandbox for \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" May 10 00:22:33.358287 containerd[1476]: time="2025-05-10T00:22:33.358017407Z" level=info msg="Forcibly stopping sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\"" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.410 [WARNING][5820] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a161c57b-ec58-4358-a6d8-144a63406336", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"eb9130f46a99f8da9752898c5ec1f3c77ba27e136ebfbeba6eafc5d1978ceccb", Pod:"calico-apiserver-5ff6889754-2dhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia53702697d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.411 [INFO][5820] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.411 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" iface="eth0" netns="" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.411 [INFO][5820] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.411 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.433 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.433 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.433 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.445 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.445 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" HandleID="k8s-pod-network.4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--2dhwk-eth0" May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.447 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.452184 containerd[1476]: 2025-05-10 00:22:33.449 [INFO][5820] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041" May 10 00:22:33.453982 containerd[1476]: time="2025-05-10T00:22:33.452130355Z" level=info msg="TearDown network for sandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" successfully" May 10 00:22:33.458219 containerd[1476]: time="2025-05-10T00:22:33.458144095Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:33.458219 containerd[1476]: time="2025-05-10T00:22:33.458217015Z" level=info msg="RemovePodSandbox \"4cc2a9cb1dba166ab147b2a9a2f444c012c9571bd426440abbeef8458e6f9041\" returns successfully" May 10 00:22:33.459273 containerd[1476]: time="2025-05-10T00:22:33.459133298Z" level=info msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.535 [WARNING][5846] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1040029-c5db-4540-b478-e7a12da2dd29", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a", Pod:"coredns-7db6d8ff4d-ffbwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0deb962a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.535 [INFO][5846] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.536 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" iface="eth0" netns="" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.536 [INFO][5846] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.536 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.559 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.559 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.559 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.572 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.572 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.575 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.579046 containerd[1476]: 2025-05-10 00:22:33.576 [INFO][5846] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.579046 containerd[1476]: time="2025-05-10T00:22:33.578663810Z" level=info msg="TearDown network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" successfully" May 10 00:22:33.579046 containerd[1476]: time="2025-05-10T00:22:33.578689210Z" level=info msg="StopPodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" returns successfully" May 10 00:22:33.580881 containerd[1476]: time="2025-05-10T00:22:33.580517096Z" level=info msg="RemovePodSandbox for \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" May 10 00:22:33.580881 containerd[1476]: time="2025-05-10T00:22:33.580554136Z" level=info msg="Forcibly stopping sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\"" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.648 [WARNING][5875] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e1040029-c5db-4540-b478-e7a12da2dd29", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"be7a3896402b8514eb9baf82ab1e587a2415024a5feebb67989f753356572a6a", Pod:"coredns-7db6d8ff4d-ffbwh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif0deb962a7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.648 [INFO][5875] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.648 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" iface="eth0" netns="" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.648 [INFO][5875] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.648 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.671 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.671 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.671 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.684 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.684 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" HandleID="k8s-pod-network.94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--ffbwh-eth0" May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.686 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.689786 containerd[1476]: 2025-05-10 00:22:33.688 [INFO][5875] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b" May 10 00:22:33.689786 containerd[1476]: time="2025-05-10T00:22:33.689617893Z" level=info msg="TearDown network for sandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" successfully" May 10 00:22:33.695176 containerd[1476]: time="2025-05-10T00:22:33.695125031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:33.695400 containerd[1476]: time="2025-05-10T00:22:33.695206071Z" level=info msg="RemovePodSandbox \"94b91fc300c284f5aafc44d3102346dc047c22384dfa96f0d98a0eac1eb2809b\" returns successfully" May 10 00:22:33.696563 containerd[1476]: time="2025-05-10T00:22:33.696511275Z" level=info msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" May 10 00:22:33.722966 sshd[5777]: Connection closed by authenticating user root 60.164.133.37 port 47904 [preauth] May 10 00:22:33.734103 systemd[1]: sshd@126-91.107.204.139:22-60.164.133.37:47904.service: Deactivated successfully. May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.751 [WARNING][5920] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea774e10-d5e8-49b4-87ba-cd65e20304d9", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7", Pod:"coredns-7db6d8ff4d-h4s4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db4b576531", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.751 [INFO][5920] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.751 [INFO][5920] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" iface="eth0" netns="" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.751 [INFO][5920] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.751 [INFO][5920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.772 [INFO][5929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.772 [INFO][5929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.772 [INFO][5929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.784 [WARNING][5929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.784 [INFO][5929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.786 [INFO][5929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.790374 containerd[1476]: 2025-05-10 00:22:33.789 [INFO][5920] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.790374 containerd[1476]: time="2025-05-10T00:22:33.790371542Z" level=info msg="TearDown network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" successfully" May 10 00:22:33.791025 containerd[1476]: time="2025-05-10T00:22:33.790396143Z" level=info msg="StopPodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" returns successfully" May 10 00:22:33.791025 containerd[1476]: time="2025-05-10T00:22:33.790974384Z" level=info msg="RemovePodSandbox for \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" May 10 00:22:33.791025 containerd[1476]: time="2025-05-10T00:22:33.791004585Z" level=info msg="Forcibly stopping sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\"" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.859 [WARNING][5947] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea774e10-d5e8-49b4-87ba-cd65e20304d9", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"448d658a96f7b8d9cec54a6ebf90099dff909d2d11bb547eaa2315c1e080a2a7", Pod:"coredns-7db6d8ff4d-h4s4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3db4b576531", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.859 [INFO][5947] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.859 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" iface="eth0" netns="" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.859 [INFO][5947] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.859 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.887 [INFO][5955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.887 [INFO][5955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.887 [INFO][5955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.898 [WARNING][5955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.898 [INFO][5955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" HandleID="k8s-pod-network.867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" Workload="ci--4081--3--3--n--2389c948d4-k8s-coredns--7db6d8ff4d--h4s4c-eth0" May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.901 [INFO][5955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:33.908082 containerd[1476]: 2025-05-10 00:22:33.904 [INFO][5947] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a" May 10 00:22:33.908082 containerd[1476]: time="2025-05-10T00:22:33.908030048Z" level=info msg="TearDown network for sandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" successfully" May 10 00:22:33.921900 containerd[1476]: time="2025-05-10T00:22:33.921835573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:33.922217 containerd[1476]: time="2025-05-10T00:22:33.921962213Z" level=info msg="RemovePodSandbox \"867405b82ce545d007cbbb35a0bbbdc2d615f2871eb4b227a244c19890a25f9a\" returns successfully" May 10 00:22:33.924321 containerd[1476]: time="2025-05-10T00:22:33.922892696Z" level=info msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" May 10 00:22:33.924129 systemd[1]: Started sshd@127-91.107.204.139:22-60.164.133.37:49224.service - OpenSSH per-connection server daemon (60.164.133.37:49224). May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:33.978 [WARNING][5975] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fe5f5d0-455a-4b01-9791-41d2483185a9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d", Pod:"csi-node-driver-55f69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2ba0503a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:33.978 [INFO][5975] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:33.978 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" iface="eth0" netns="" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:33.978 [INFO][5975] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:33.978 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.004 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.004 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.004 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.015 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.015 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.017 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.020747 containerd[1476]: 2025-05-10 00:22:34.018 [INFO][5975] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.022220 containerd[1476]: time="2025-05-10T00:22:34.021374538Z" level=info msg="TearDown network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" successfully" May 10 00:22:34.022220 containerd[1476]: time="2025-05-10T00:22:34.021406618Z" level=info msg="StopPodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" returns successfully" May 10 00:22:34.022220 containerd[1476]: time="2025-05-10T00:22:34.021943060Z" level=info msg="RemovePodSandbox for \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" May 10 00:22:34.022220 containerd[1476]: time="2025-05-10T00:22:34.021974660Z" level=info msg="Forcibly stopping sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\"" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.068 [WARNING][6001] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fe5f5d0-455a-4b01-9791-41d2483185a9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"db632ec328e8d57fe39928d183c20d654f44087c434cad631a752b0e8f15fa6d", Pod:"csi-node-driver-55f69", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2ba0503a99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.068 [INFO][6001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.068 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" iface="eth0" netns="" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.068 [INFO][6001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.068 [INFO][6001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.097 [INFO][6008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.097 [INFO][6008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.097 [INFO][6008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.113 [WARNING][6008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.113 [INFO][6008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" HandleID="k8s-pod-network.0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" Workload="ci--4081--3--3--n--2389c948d4-k8s-csi--node--driver--55f69-eth0" May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.116 [INFO][6008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.121224 containerd[1476]: 2025-05-10 00:22:34.118 [INFO][6001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422" May 10 00:22:34.121224 containerd[1476]: time="2025-05-10T00:22:34.120629739Z" level=info msg="TearDown network for sandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" successfully" May 10 00:22:34.129391 containerd[1476]: time="2025-05-10T00:22:34.129256447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:34.129391 containerd[1476]: time="2025-05-10T00:22:34.129366647Z" level=info msg="RemovePodSandbox \"0f60aa588e08b6e7e646ace3470b3d66acead500b2ba9893b462214dc5b32422\" returns successfully" May 10 00:22:34.129391 containerd[1476]: time="2025-05-10T00:22:34.129915409Z" level=info msg="StopPodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.177 [WARNING][6026] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b78a76-4b54-49d0-9136-33da1a2535fc", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e", Pod:"calico-apiserver-5ff6889754-226jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0a50455a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.177 [INFO][6026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.177 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" iface="eth0" netns="" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.177 [INFO][6026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.177 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.200 [INFO][6033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.201 [INFO][6033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.201 [INFO][6033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.211 [WARNING][6033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.211 [INFO][6033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.213 [INFO][6033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.216135 containerd[1476]: 2025-05-10 00:22:34.214 [INFO][6026] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.216135 containerd[1476]: time="2025-05-10T00:22:34.216108607Z" level=info msg="TearDown network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" successfully" May 10 00:22:34.216630 containerd[1476]: time="2025-05-10T00:22:34.216146968Z" level=info msg="StopPodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" returns successfully" May 10 00:22:34.218753 containerd[1476]: time="2025-05-10T00:22:34.218684696Z" level=info msg="RemovePodSandbox for \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" May 10 00:22:34.218864 containerd[1476]: time="2025-05-10T00:22:34.218765616Z" level=info msg="Forcibly stopping sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\"" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.272 [WARNING][6051] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0", GenerateName:"calico-apiserver-5ff6889754-", Namespace:"calico-apiserver", SelfLink:"", UID:"42b78a76-4b54-49d0-9136-33da1a2535fc", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ff6889754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"480466ada0b79ad9e7f3763ec46d397e7b3b98f0aef93213955b0fd5498bf14e", Pod:"calico-apiserver-5ff6889754-226jl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibb0a50455a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.272 [INFO][6051] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.273 [INFO][6051] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" iface="eth0" netns="" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.273 [INFO][6051] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.273 [INFO][6051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.299 [INFO][6059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.300 [INFO][6059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.300 [INFO][6059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.309 [WARNING][6059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.310 [INFO][6059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" HandleID="k8s-pod-network.070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--apiserver--5ff6889754--226jl-eth0" May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.313 [INFO][6059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.318488 containerd[1476]: 2025-05-10 00:22:34.316 [INFO][6051] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0" May 10 00:22:34.319922 containerd[1476]: time="2025-05-10T00:22:34.319204621Z" level=info msg="TearDown network for sandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" successfully" May 10 00:22:34.324520 containerd[1476]: time="2025-05-10T00:22:34.324308877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:34.324520 containerd[1476]: time="2025-05-10T00:22:34.324388078Z" level=info msg="RemovePodSandbox \"070365df59852ff46a5dbb62bcd9b75b7dc6275c5d6955d4951f0546ae90b6c0\" returns successfully" May 10 00:22:34.325548 containerd[1476]: time="2025-05-10T00:22:34.325134440Z" level=info msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.369 [WARNING][6077] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0", GenerateName:"calico-kube-controllers-84fbc7f9c5-", Namespace:"calico-system", SelfLink:"", UID:"09b51544-4344-4b33-9ab6-929ff2781faf", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbc7f9c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7", Pod:"calico-kube-controllers-84fbc7f9c5-z5595", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali847b68e0340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.370 [INFO][6077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.370 [INFO][6077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" iface="eth0" netns="" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.370 [INFO][6077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.370 [INFO][6077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.394 [INFO][6084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.394 [INFO][6084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.394 [INFO][6084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.404 [WARNING][6084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.404 [INFO][6084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.406 [INFO][6084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.408671 containerd[1476]: 2025-05-10 00:22:34.407 [INFO][6077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.410241 containerd[1476]: time="2025-05-10T00:22:34.408955911Z" level=info msg="TearDown network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" successfully" May 10 00:22:34.410241 containerd[1476]: time="2025-05-10T00:22:34.408985111Z" level=info msg="StopPodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" returns successfully" May 10 00:22:34.410241 containerd[1476]: time="2025-05-10T00:22:34.409949914Z" level=info msg="RemovePodSandbox for \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" May 10 00:22:34.410241 containerd[1476]: time="2025-05-10T00:22:34.409981714Z" level=info msg="Forcibly stopping sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\"" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.456 [WARNING][6102] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0", GenerateName:"calico-kube-controllers-84fbc7f9c5-", Namespace:"calico-system", SelfLink:"", UID:"09b51544-4344-4b33-9ab6-929ff2781faf", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 10, 0, 21, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbc7f9c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-2389c948d4", ContainerID:"ec02b4778ee6e22ffd6d241e43a45b3ccbc6346d9a9788097f1c036b65ae3be7", Pod:"calico-kube-controllers-84fbc7f9c5-z5595", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali847b68e0340", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.457 [INFO][6102] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.457 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" iface="eth0" netns="" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.457 [INFO][6102] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.457 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.491 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.491 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.491 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.500 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.500 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" HandleID="k8s-pod-network.344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" Workload="ci--4081--3--3--n--2389c948d4-k8s-calico--kube--controllers--84fbc7f9c5--z5595-eth0" May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.503 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 10 00:22:34.508424 containerd[1476]: 2025-05-10 00:22:34.505 [INFO][6102] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837" May 10 00:22:34.508424 containerd[1476]: time="2025-05-10T00:22:34.506941668Z" level=info msg="TearDown network for sandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" successfully" May 10 00:22:34.513729 containerd[1476]: time="2025-05-10T00:22:34.513673730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 10 00:22:34.513948 containerd[1476]: time="2025-05-10T00:22:34.513928291Z" level=info msg="RemovePodSandbox \"344692757059814ed5e80d5fd6bf49b5b69325499aef977bf9935fe334c9a837\" returns successfully" May 10 00:22:34.871372 sshd[5962]: Connection closed by authenticating user root 60.164.133.37 port 49224 [preauth] May 10 00:22:34.875564 systemd[1]: sshd@127-91.107.204.139:22-60.164.133.37:49224.service: Deactivated successfully. May 10 00:22:35.072648 systemd[1]: Started sshd@128-91.107.204.139:22-60.164.133.37:50514.service - OpenSSH per-connection server daemon (60.164.133.37:50514). May 10 00:22:35.887872 systemd[1]: Started sshd@129-91.107.204.139:22-103.232.80.5:35986.service - OpenSSH per-connection server daemon (103.232.80.5:35986). May 10 00:22:36.019378 sshd[6119]: Connection closed by authenticating user root 60.164.133.37 port 50514 [preauth] May 10 00:22:36.022864 systemd[1]: sshd@128-91.107.204.139:22-60.164.133.37:50514.service: Deactivated successfully. May 10 00:22:36.223765 systemd[1]: Started sshd@130-91.107.204.139:22-60.164.133.37:51992.service - OpenSSH per-connection server daemon (60.164.133.37:51992). May 10 00:22:36.339105 sshd[6142]: Connection closed by 103.232.80.5 port 35986 [preauth] May 10 00:22:36.341732 systemd[1]: sshd@129-91.107.204.139:22-103.232.80.5:35986.service: Deactivated successfully. May 10 00:22:37.173772 sshd[6147]: Connection closed by authenticating user root 60.164.133.37 port 51992 [preauth] May 10 00:22:37.177644 systemd[1]: sshd@130-91.107.204.139:22-60.164.133.37:51992.service: Deactivated successfully. May 10 00:22:37.376736 systemd[1]: Started sshd@131-91.107.204.139:22-60.164.133.37:53414.service - OpenSSH per-connection server daemon (60.164.133.37:53414). May 10 00:22:38.343911 sshd[6154]: Connection closed by authenticating user root 60.164.133.37 port 53414 [preauth] May 10 00:22:38.348122 systemd[1]: sshd@131-91.107.204.139:22-60.164.133.37:53414.service: Deactivated successfully. 
May 10 00:22:38.545358 systemd[1]: Started sshd@132-91.107.204.139:22-60.164.133.37:54880.service - OpenSSH per-connection server daemon (60.164.133.37:54880). May 10 00:22:39.490110 sshd[6159]: Connection closed by authenticating user root 60.164.133.37 port 54880 [preauth] May 10 00:22:39.494759 systemd[1]: sshd@132-91.107.204.139:22-60.164.133.37:54880.service: Deactivated successfully. May 10 00:22:39.696598 systemd[1]: Started sshd@133-91.107.204.139:22-60.164.133.37:56248.service - OpenSSH per-connection server daemon (60.164.133.37:56248). May 10 00:22:40.669346 sshd[6164]: Connection closed by authenticating user root 60.164.133.37 port 56248 [preauth] May 10 00:22:40.672830 systemd[1]: sshd@133-91.107.204.139:22-60.164.133.37:56248.service: Deactivated successfully. May 10 00:22:40.867724 systemd[1]: Started sshd@134-91.107.204.139:22-60.164.133.37:57652.service - OpenSSH per-connection server daemon (60.164.133.37:57652). May 10 00:22:41.826189 sshd[6169]: Connection closed by authenticating user root 60.164.133.37 port 57652 [preauth] May 10 00:22:41.830065 systemd[1]: sshd@134-91.107.204.139:22-60.164.133.37:57652.service: Deactivated successfully. May 10 00:22:42.028730 systemd[1]: Started sshd@135-91.107.204.139:22-60.164.133.37:58994.service - OpenSSH per-connection server daemon (60.164.133.37:58994). May 10 00:22:42.978417 sshd[6174]: Connection closed by authenticating user root 60.164.133.37 port 58994 [preauth] May 10 00:22:42.979588 systemd[1]: sshd@135-91.107.204.139:22-60.164.133.37:58994.service: Deactivated successfully. May 10 00:22:43.181637 systemd[1]: Started sshd@136-91.107.204.139:22-60.164.133.37:60272.service - OpenSSH per-connection server daemon (60.164.133.37:60272). May 10 00:22:44.133541 sshd[6187]: Connection closed by authenticating user root 60.164.133.37 port 60272 [preauth] May 10 00:22:44.139948 systemd[1]: sshd@136-91.107.204.139:22-60.164.133.37:60272.service: Deactivated successfully. May 10 00:22:44.336618 systemd[1]: Started sshd@137-91.107.204.139:22-60.164.133.37:33610.service - OpenSSH per-connection server daemon (60.164.133.37:33610). May 10 00:22:45.291211 sshd[6192]: Connection closed by authenticating user root 60.164.133.37 port 33610 [preauth] May 10 00:22:45.294351 systemd[1]: sshd@137-91.107.204.139:22-60.164.133.37:33610.service: Deactivated successfully. May 10 00:22:45.502713 systemd[1]: Started sshd@138-91.107.204.139:22-60.164.133.37:34956.service - OpenSSH per-connection server daemon (60.164.133.37:34956). May 10 00:22:46.463970 sshd[6197]: Connection closed by authenticating user root 60.164.133.37 port 34956 [preauth] May 10 00:22:46.466768 systemd[1]: sshd@138-91.107.204.139:22-60.164.133.37:34956.service: Deactivated successfully. May 10 00:22:46.678810 systemd[1]: Started sshd@139-91.107.204.139:22-60.164.133.37:36192.service - OpenSSH per-connection server daemon (60.164.133.37:36192). May 10 00:22:47.644904 sshd[6202]: Connection closed by authenticating user root 60.164.133.37 port 36192 [preauth] May 10 00:22:47.648070 systemd[1]: sshd@139-91.107.204.139:22-60.164.133.37:36192.service: Deactivated successfully. May 10 00:22:47.845680 systemd[1]: Started sshd@140-91.107.204.139:22-60.164.133.37:37708.service - OpenSSH per-connection server daemon (60.164.133.37:37708). May 10 00:22:48.791364 sshd[6207]: Connection closed by authenticating user root 60.164.133.37 port 37708 [preauth] May 10 00:22:48.795491 systemd[1]: sshd@140-91.107.204.139:22-60.164.133.37:37708.service: Deactivated successfully. 
May 10 00:22:49.002172 systemd[1]: Started sshd@141-91.107.204.139:22-60.164.133.37:39154.service - OpenSSH per-connection server daemon (60.164.133.37:39154). May 10 00:22:49.957240 sshd[6214]: Connection closed by authenticating user root 60.164.133.37 port 39154 [preauth] May 10 00:22:49.959731 systemd[1]: sshd@141-91.107.204.139:22-60.164.133.37:39154.service: Deactivated successfully. May 10 00:22:50.165739 systemd[1]: Started sshd@142-91.107.204.139:22-60.164.133.37:40470.service - OpenSSH per-connection server daemon (60.164.133.37:40470). May 10 00:22:51.128881 sshd[6220]: Connection closed by authenticating user root 60.164.133.37 port 40470 [preauth] May 10 00:22:51.132815 systemd[1]: sshd@142-91.107.204.139:22-60.164.133.37:40470.service: Deactivated successfully. May 10 00:22:51.333033 systemd[1]: Started sshd@143-91.107.204.139:22-60.164.133.37:41918.service - OpenSSH per-connection server daemon (60.164.133.37:41918). May 10 00:22:52.282359 sshd[6225]: Connection closed by authenticating user root 60.164.133.37 port 41918 [preauth] May 10 00:22:52.285692 systemd[1]: sshd@143-91.107.204.139:22-60.164.133.37:41918.service: Deactivated successfully. May 10 00:22:52.493453 systemd[1]: Started sshd@144-91.107.204.139:22-60.164.133.37:43212.service - OpenSSH per-connection server daemon (60.164.133.37:43212). May 10 00:22:53.437604 sshd[6230]: Connection closed by authenticating user root 60.164.133.37 port 43212 [preauth] May 10 00:22:53.441544 systemd[1]: sshd@144-91.107.204.139:22-60.164.133.37:43212.service: Deactivated successfully. May 10 00:22:53.646000 systemd[1]: Started sshd@145-91.107.204.139:22-60.164.133.37:44666.service - OpenSSH per-connection server daemon (60.164.133.37:44666). May 10 00:22:54.606661 sshd[6235]: Connection closed by authenticating user root 60.164.133.37 port 44666 [preauth] May 10 00:22:54.609567 systemd[1]: sshd@145-91.107.204.139:22-60.164.133.37:44666.service: Deactivated successfully. May 10 00:22:54.913663 systemd[1]: Started sshd@146-91.107.204.139:22-60.164.133.37:46032.service - OpenSSH per-connection server daemon (60.164.133.37:46032). May 10 00:22:56.117478 sshd[6240]: Connection closed by authenticating user root 60.164.133.37 port 46032 [preauth] May 10 00:22:56.121462 systemd[1]: sshd@146-91.107.204.139:22-60.164.133.37:46032.service: Deactivated successfully. May 10 00:22:56.271700 systemd[1]: Started sshd@147-91.107.204.139:22-60.164.133.37:47876.service - OpenSSH per-connection server daemon (60.164.133.37:47876). May 10 00:22:57.223657 sshd[6245]: Connection closed by authenticating user root 60.164.133.37 port 47876 [preauth] May 10 00:22:57.227238 systemd[1]: sshd@147-91.107.204.139:22-60.164.133.37:47876.service: Deactivated successfully. May 10 00:22:57.430707 systemd[1]: Started sshd@148-91.107.204.139:22-60.164.133.37:49294.service - OpenSSH per-connection server daemon (60.164.133.37:49294). May 10 00:22:58.382562 sshd[6250]: Connection closed by authenticating user root 60.164.133.37 port 49294 [preauth] May 10 00:22:58.385995 systemd[1]: sshd@148-91.107.204.139:22-60.164.133.37:49294.service: Deactivated successfully. May 10 00:22:58.697613 systemd[1]: Started sshd@149-91.107.204.139:22-60.164.133.37:50626.service - OpenSSH per-connection server daemon (60.164.133.37:50626). May 10 00:22:59.929677 sshd[6276]: Connection closed by authenticating user root 60.164.133.37 port 50626 [preauth] May 10 00:22:59.933753 systemd[1]: sshd@149-91.107.204.139:22-60.164.133.37:50626.service: Deactivated successfully. 
May 10 00:23:00.182735 systemd[1]: Started sshd@150-91.107.204.139:22-60.164.133.37:52444.service - OpenSSH per-connection server daemon (60.164.133.37:52444). May 10 00:23:01.387861 sshd[6281]: Connection closed by authenticating user root 60.164.133.37 port 52444 [preauth] May 10 00:23:01.392174 systemd[1]: sshd@150-91.107.204.139:22-60.164.133.37:52444.service: Deactivated successfully. May 10 00:23:01.539784 systemd[1]: Started sshd@151-91.107.204.139:22-60.164.133.37:54300.service - OpenSSH per-connection server daemon (60.164.133.37:54300). May 10 00:23:02.481040 sshd[6286]: Connection closed by authenticating user root 60.164.133.37 port 54300 [preauth] May 10 00:23:02.486034 systemd[1]: sshd@151-91.107.204.139:22-60.164.133.37:54300.service: Deactivated successfully. May 10 00:23:02.683544 systemd[1]: Started sshd@152-91.107.204.139:22-60.164.133.37:55674.service - OpenSSH per-connection server daemon (60.164.133.37:55674). May 10 00:23:03.605074 systemd[1]: run-containerd-runc-k8s.io-f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01-runc.TzkQ3z.mount: Deactivated successfully. May 10 00:23:03.621321 sshd[6299]: Connection closed by authenticating user root 60.164.133.37 port 55674 [preauth] May 10 00:23:03.623918 systemd[1]: sshd@152-91.107.204.139:22-60.164.133.37:55674.service: Deactivated successfully. May 10 00:23:03.829739 systemd[1]: Started sshd@153-91.107.204.139:22-60.164.133.37:57014.service - OpenSSH per-connection server daemon (60.164.133.37:57014). May 10 00:23:04.786964 sshd[6325]: Connection closed by authenticating user root 60.164.133.37 port 57014 [preauth] May 10 00:23:04.790429 systemd[1]: sshd@153-91.107.204.139:22-60.164.133.37:57014.service: Deactivated successfully. May 10 00:23:04.992565 systemd[1]: Started sshd@154-91.107.204.139:22-60.164.133.37:58406.service - OpenSSH per-connection server daemon (60.164.133.37:58406). May 10 00:23:05.267423 systemd[1]: run-containerd-runc-k8s.io-d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc-runc.mpDXDB.mount: Deactivated successfully. May 10 00:23:05.941919 sshd[6330]: Connection closed by authenticating user root 60.164.133.37 port 58406 [preauth] May 10 00:23:05.945126 systemd[1]: sshd@154-91.107.204.139:22-60.164.133.37:58406.service: Deactivated successfully. May 10 00:23:06.148886 systemd[1]: Started sshd@155-91.107.204.139:22-60.164.133.37:59802.service - OpenSSH per-connection server daemon (60.164.133.37:59802). May 10 00:23:07.116138 sshd[6355]: Connection closed by authenticating user root 60.164.133.37 port 59802 [preauth] May 10 00:23:07.121569 systemd[1]: sshd@155-91.107.204.139:22-60.164.133.37:59802.service: Deactivated successfully. May 10 00:23:07.320699 systemd[1]: Started sshd@156-91.107.204.139:22-60.164.133.37:32940.service - OpenSSH per-connection server daemon (60.164.133.37:32940). May 10 00:23:08.259587 sshd[6360]: Connection closed by authenticating user root 60.164.133.37 port 32940 [preauth] May 10 00:23:08.262681 systemd[1]: sshd@156-91.107.204.139:22-60.164.133.37:32940.service: Deactivated successfully. May 10 00:23:08.463110 systemd[1]: Started sshd@157-91.107.204.139:22-60.164.133.37:34456.service - OpenSSH per-connection server daemon (60.164.133.37:34456). May 10 00:23:09.396376 sshd[6365]: Connection closed by authenticating user root 60.164.133.37 port 34456 [preauth] May 10 00:23:09.399015 systemd[1]: sshd@157-91.107.204.139:22-60.164.133.37:34456.service: Deactivated successfully. 
May 10 00:23:09.602713 systemd[1]: Started sshd@158-91.107.204.139:22-60.164.133.37:35780.service - OpenSSH per-connection server daemon (60.164.133.37:35780).
May 10 00:23:10.552826 sshd[6370]: Connection closed by authenticating user root 60.164.133.37 port 35780 [preauth]
May 10 00:23:10.556028 systemd[1]: sshd@158-91.107.204.139:22-60.164.133.37:35780.service: Deactivated successfully.
May 10 00:23:10.758734 systemd[1]: Started sshd@159-91.107.204.139:22-60.164.133.37:37188.service - OpenSSH per-connection server daemon (60.164.133.37:37188).
May 10 00:23:11.708473 sshd[6375]: Connection closed by authenticating user root 60.164.133.37 port 37188 [preauth]
May 10 00:23:11.712609 systemd[1]: sshd@159-91.107.204.139:22-60.164.133.37:37188.service: Deactivated successfully.
May 10 00:23:11.917807 systemd[1]: Started sshd@160-91.107.204.139:22-60.164.133.37:38640.service - OpenSSH per-connection server daemon (60.164.133.37:38640).
May 10 00:23:12.863760 sshd[6380]: Connection closed by authenticating user root 60.164.133.37 port 38640 [preauth]
May 10 00:23:12.865826 systemd[1]: sshd@160-91.107.204.139:22-60.164.133.37:38640.service: Deactivated successfully.
May 10 00:23:13.067781 systemd[1]: Started sshd@161-91.107.204.139:22-60.164.133.37:39988.service - OpenSSH per-connection server daemon (60.164.133.37:39988).
May 10 00:23:14.016368 sshd[6385]: Connection closed by authenticating user root 60.164.133.37 port 39988 [preauth]
May 10 00:23:14.019745 systemd[1]: sshd@161-91.107.204.139:22-60.164.133.37:39988.service: Deactivated successfully.
May 10 00:23:14.214704 systemd[1]: Started sshd@162-91.107.204.139:22-60.164.133.37:41312.service - OpenSSH per-connection server daemon (60.164.133.37:41312).
May 10 00:23:15.155514 sshd[6390]: Connection closed by authenticating user root 60.164.133.37 port 41312 [preauth]
May 10 00:23:15.159569 systemd[1]: sshd@162-91.107.204.139:22-60.164.133.37:41312.service: Deactivated successfully.
May 10 00:23:15.355545 systemd[1]: Started sshd@163-91.107.204.139:22-60.164.133.37:42766.service - OpenSSH per-connection server daemon (60.164.133.37:42766).
May 10 00:23:16.322140 sshd[6395]: Connection closed by authenticating user root 60.164.133.37 port 42766 [preauth]
May 10 00:23:16.325184 systemd[1]: sshd@163-91.107.204.139:22-60.164.133.37:42766.service: Deactivated successfully.
May 10 00:23:16.519834 systemd[1]: Started sshd@164-91.107.204.139:22-60.164.133.37:44112.service - OpenSSH per-connection server daemon (60.164.133.37:44112).
May 10 00:23:17.474037 sshd[6400]: Connection closed by authenticating user root 60.164.133.37 port 44112 [preauth]
May 10 00:23:17.476961 systemd[1]: sshd@164-91.107.204.139:22-60.164.133.37:44112.service: Deactivated successfully.
May 10 00:23:17.673130 systemd[1]: Started sshd@165-91.107.204.139:22-60.164.133.37:45480.service - OpenSSH per-connection server daemon (60.164.133.37:45480).
May 10 00:23:18.612910 sshd[6405]: Connection closed by authenticating user root 60.164.133.37 port 45480 [preauth]
May 10 00:23:18.615726 systemd[1]: sshd@165-91.107.204.139:22-60.164.133.37:45480.service: Deactivated successfully.
May 10 00:23:18.810889 systemd[1]: Started sshd@166-91.107.204.139:22-60.164.133.37:46782.service - OpenSSH per-connection server daemon (60.164.133.37:46782).
May 10 00:23:19.774424 sshd[6412]: Connection closed by authenticating user root 60.164.133.37 port 46782 [preauth]
May 10 00:23:19.778608 systemd[1]: sshd@166-91.107.204.139:22-60.164.133.37:46782.service: Deactivated successfully.
May 10 00:23:19.979933 systemd[1]: Started sshd@167-91.107.204.139:22-60.164.133.37:48166.service - OpenSSH per-connection server daemon (60.164.133.37:48166).
May 10 00:23:20.915754 sshd[6417]: Connection closed by authenticating user root 60.164.133.37 port 48166 [preauth]
May 10 00:23:20.918065 systemd[1]: sshd@167-91.107.204.139:22-60.164.133.37:48166.service: Deactivated successfully.
May 10 00:23:21.119076 systemd[1]: Started sshd@168-91.107.204.139:22-60.164.133.37:49628.service - OpenSSH per-connection server daemon (60.164.133.37:49628).
May 10 00:23:22.066474 sshd[6422]: Connection closed by authenticating user root 60.164.133.37 port 49628 [preauth]
May 10 00:23:22.067901 systemd[1]: sshd@168-91.107.204.139:22-60.164.133.37:49628.service: Deactivated successfully.
May 10 00:23:22.269804 systemd[1]: Started sshd@169-91.107.204.139:22-60.164.133.37:51082.service - OpenSSH per-connection server daemon (60.164.133.37:51082).
May 10 00:23:23.214021 sshd[6428]: Connection closed by authenticating user root 60.164.133.37 port 51082 [preauth]
May 10 00:23:23.218124 systemd[1]: sshd@169-91.107.204.139:22-60.164.133.37:51082.service: Deactivated successfully.
May 10 00:23:23.421806 systemd[1]: Started sshd@170-91.107.204.139:22-60.164.133.37:52650.service - OpenSSH per-connection server daemon (60.164.133.37:52650).
May 10 00:23:24.363794 sshd[6433]: Connection closed by authenticating user root 60.164.133.37 port 52650 [preauth]
May 10 00:23:24.367488 systemd[1]: sshd@170-91.107.204.139:22-60.164.133.37:52650.service: Deactivated successfully.
May 10 00:23:24.675742 systemd[1]: Started sshd@171-91.107.204.139:22-60.164.133.37:53962.service - OpenSSH per-connection server daemon (60.164.133.37:53962).
May 10 00:23:25.887217 sshd[6438]: Connection closed by authenticating user root 60.164.133.37 port 53962 [preauth]
May 10 00:23:25.891991 systemd[1]: sshd@171-91.107.204.139:22-60.164.133.37:53962.service: Deactivated successfully.
May 10 00:23:26.040761 systemd[1]: Started sshd@172-91.107.204.139:22-60.164.133.37:55560.service - OpenSSH per-connection server daemon (60.164.133.37:55560).
May 10 00:23:26.984462 sshd[6443]: Connection closed by authenticating user root 60.164.133.37 port 55560 [preauth]
May 10 00:23:26.987737 systemd[1]: sshd@172-91.107.204.139:22-60.164.133.37:55560.service: Deactivated successfully.
May 10 00:23:27.196776 systemd[1]: Started sshd@173-91.107.204.139:22-60.164.133.37:56948.service - OpenSSH per-connection server daemon (60.164.133.37:56948).
May 10 00:23:28.153657 sshd[6448]: Connection closed by authenticating user root 60.164.133.37 port 56948 [preauth]
May 10 00:23:28.157000 systemd[1]: sshd@173-91.107.204.139:22-60.164.133.37:56948.service: Deactivated successfully.
May 10 00:23:28.357688 systemd[1]: Started sshd@174-91.107.204.139:22-60.164.133.37:58344.service - OpenSSH per-connection server daemon (60.164.133.37:58344).
May 10 00:23:29.311689 sshd[6453]: Connection closed by authenticating user root 60.164.133.37 port 58344 [preauth]
May 10 00:23:29.314889 systemd[1]: sshd@174-91.107.204.139:22-60.164.133.37:58344.service: Deactivated successfully.
May 10 00:23:29.515884 systemd[1]: Started sshd@175-91.107.204.139:22-60.164.133.37:59820.service - OpenSSH per-connection server daemon (60.164.133.37:59820).
May 10 00:23:30.466720 sshd[6458]: Connection closed by authenticating user root 60.164.133.37 port 59820 [preauth]
May 10 00:23:30.471232 systemd[1]: sshd@175-91.107.204.139:22-60.164.133.37:59820.service: Deactivated successfully.
May 10 00:23:30.665628 systemd[1]: Started sshd@176-91.107.204.139:22-60.164.133.37:33058.service - OpenSSH per-connection server daemon (60.164.133.37:33058).
May 10 00:23:31.600515 sshd[6463]: Connection closed by authenticating user root 60.164.133.37 port 33058 [preauth]
May 10 00:23:31.603947 systemd[1]: sshd@176-91.107.204.139:22-60.164.133.37:33058.service: Deactivated successfully.
May 10 00:23:31.811210 systemd[1]: Started sshd@177-91.107.204.139:22-60.164.133.37:34382.service - OpenSSH per-connection server daemon (60.164.133.37:34382).
May 10 00:23:32.782656 sshd[6468]: Connection closed by authenticating user root 60.164.133.37 port 34382 [preauth]
May 10 00:23:32.785647 systemd[1]: sshd@177-91.107.204.139:22-60.164.133.37:34382.service: Deactivated successfully.
May 10 00:23:32.982722 systemd[1]: Started sshd@178-91.107.204.139:22-60.164.133.37:35680.service - OpenSSH per-connection server daemon (60.164.133.37:35680).
May 10 00:23:33.930878 sshd[6473]: Connection closed by authenticating user root 60.164.133.37 port 35680 [preauth]
May 10 00:23:33.933472 systemd[1]: sshd@178-91.107.204.139:22-60.164.133.37:35680.service: Deactivated successfully.
May 10 00:23:34.140679 systemd[1]: Started sshd@179-91.107.204.139:22-60.164.133.37:37224.service - OpenSSH per-connection server daemon (60.164.133.37:37224).
May 10 00:23:35.097559 sshd[6502]: Connection closed by authenticating user root 60.164.133.37 port 37224 [preauth]
May 10 00:23:35.101721 systemd[1]: sshd@179-91.107.204.139:22-60.164.133.37:37224.service: Deactivated successfully.
May 10 00:23:35.304689 systemd[1]: Started sshd@180-91.107.204.139:22-60.164.133.37:38818.service - OpenSSH per-connection server daemon (60.164.133.37:38818).
May 10 00:23:36.254717 sshd[6526]: Connection closed by authenticating user root 60.164.133.37 port 38818 [preauth]
May 10 00:23:36.259631 systemd[1]: sshd@180-91.107.204.139:22-60.164.133.37:38818.service: Deactivated successfully.
May 10 00:23:36.463651 systemd[1]: Started sshd@181-91.107.204.139:22-60.164.133.37:40082.service - OpenSSH per-connection server daemon (60.164.133.37:40082).
May 10 00:23:37.428139 sshd[6531]: Connection closed by authenticating user root 60.164.133.37 port 40082 [preauth]
May 10 00:23:37.431930 systemd[1]: sshd@181-91.107.204.139:22-60.164.133.37:40082.service: Deactivated successfully.
May 10 00:23:37.629757 systemd[1]: Started sshd@182-91.107.204.139:22-60.164.133.37:41354.service - OpenSSH per-connection server daemon (60.164.133.37:41354).
May 10 00:23:38.572638 sshd[6536]: Connection closed by authenticating user root 60.164.133.37 port 41354 [preauth]
May 10 00:23:38.576680 systemd[1]: sshd@182-91.107.204.139:22-60.164.133.37:41354.service: Deactivated successfully.
May 10 00:23:38.770842 systemd[1]: Started sshd@183-91.107.204.139:22-60.164.133.37:42704.service - OpenSSH per-connection server daemon (60.164.133.37:42704).
May 10 00:23:39.710584 sshd[6541]: Connection closed by authenticating user root 60.164.133.37 port 42704 [preauth]
May 10 00:23:39.713883 systemd[1]: sshd@183-91.107.204.139:22-60.164.133.37:42704.service: Deactivated successfully.
May 10 00:23:39.913634 systemd[1]: Started sshd@184-91.107.204.139:22-60.164.133.37:44070.service - OpenSSH per-connection server daemon (60.164.133.37:44070).
May 10 00:23:40.868576 sshd[6546]: Connection closed by authenticating user root 60.164.133.37 port 44070 [preauth]
May 10 00:23:40.871827 systemd[1]: sshd@184-91.107.204.139:22-60.164.133.37:44070.service: Deactivated successfully.
May 10 00:23:41.081707 systemd[1]: Started sshd@185-91.107.204.139:22-60.164.133.37:45512.service - OpenSSH per-connection server daemon (60.164.133.37:45512).
May 10 00:23:42.039878 sshd[6551]: Connection closed by authenticating user root 60.164.133.37 port 45512 [preauth]
May 10 00:23:42.042582 systemd[1]: sshd@185-91.107.204.139:22-60.164.133.37:45512.service: Deactivated successfully.
May 10 00:23:42.253696 systemd[1]: Started sshd@186-91.107.204.139:22-60.164.133.37:46984.service - OpenSSH per-connection server daemon (60.164.133.37:46984).
May 10 00:23:43.210250 sshd[6556]: Connection closed by authenticating user root 60.164.133.37 port 46984 [preauth]
May 10 00:23:43.217068 systemd[1]: sshd@186-91.107.204.139:22-60.164.133.37:46984.service: Deactivated successfully.
May 10 00:23:43.416632 systemd[1]: Started sshd@187-91.107.204.139:22-60.164.133.37:48456.service - OpenSSH per-connection server daemon (60.164.133.37:48456).
May 10 00:23:44.365358 sshd[6569]: Connection closed by authenticating user root 60.164.133.37 port 48456 [preauth]
May 10 00:23:44.368916 systemd[1]: sshd@187-91.107.204.139:22-60.164.133.37:48456.service: Deactivated successfully.
May 10 00:23:44.568803 systemd[1]: Started sshd@188-91.107.204.139:22-60.164.133.37:49780.service - OpenSSH per-connection server daemon (60.164.133.37:49780).
May 10 00:23:45.501977 sshd[6574]: Connection closed by authenticating user root 60.164.133.37 port 49780 [preauth]
May 10 00:23:45.509220 systemd[1]: sshd@188-91.107.204.139:22-60.164.133.37:49780.service: Deactivated successfully.
May 10 00:23:45.708761 systemd[1]: Started sshd@189-91.107.204.139:22-60.164.133.37:51148.service - OpenSSH per-connection server daemon (60.164.133.37:51148).
May 10 00:23:46.667075 sshd[6579]: Connection closed by authenticating user root 60.164.133.37 port 51148 [preauth]
May 10 00:23:46.669245 systemd[1]: sshd@189-91.107.204.139:22-60.164.133.37:51148.service: Deactivated successfully.
May 10 00:23:46.875587 systemd[1]: Started sshd@190-91.107.204.139:22-60.164.133.37:52638.service - OpenSSH per-connection server daemon (60.164.133.37:52638).
May 10 00:23:47.830191 sshd[6584]: Connection closed by authenticating user root 60.164.133.37 port 52638 [preauth]
May 10 00:23:47.833217 systemd[1]: sshd@190-91.107.204.139:22-60.164.133.37:52638.service: Deactivated successfully.
May 10 00:23:48.036798 systemd[1]: Started sshd@191-91.107.204.139:22-60.164.133.37:54132.service - OpenSSH per-connection server daemon (60.164.133.37:54132).
May 10 00:23:48.969138 sshd[6589]: Connection closed by authenticating user root 60.164.133.37 port 54132 [preauth]
May 10 00:23:48.971942 systemd[1]: sshd@191-91.107.204.139:22-60.164.133.37:54132.service: Deactivated successfully.
May 10 00:23:49.185597 systemd[1]: Started sshd@192-91.107.204.139:22-60.164.133.37:55462.service - OpenSSH per-connection server daemon (60.164.133.37:55462).
May 10 00:23:50.141367 sshd[6596]: Connection closed by authenticating user root 60.164.133.37 port 55462 [preauth]
May 10 00:23:50.144762 systemd[1]: sshd@192-91.107.204.139:22-60.164.133.37:55462.service: Deactivated successfully.
May 10 00:23:50.351790 systemd[1]: Started sshd@193-91.107.204.139:22-60.164.133.37:56862.service - OpenSSH per-connection server daemon (60.164.133.37:56862).
May 10 00:23:51.309737 sshd[6601]: Connection closed by authenticating user root 60.164.133.37 port 56862 [preauth]
May 10 00:23:51.312807 systemd[1]: sshd@193-91.107.204.139:22-60.164.133.37:56862.service: Deactivated successfully.
May 10 00:23:51.533675 systemd[1]: Started sshd@194-91.107.204.139:22-60.164.133.37:58124.service - OpenSSH per-connection server daemon (60.164.133.37:58124).
May 10 00:23:52.492992 sshd[6606]: Connection closed by authenticating user root 60.164.133.37 port 58124 [preauth]
May 10 00:23:52.496095 systemd[1]: sshd@194-91.107.204.139:22-60.164.133.37:58124.service: Deactivated successfully.
May 10 00:23:52.696739 systemd[1]: Started sshd@195-91.107.204.139:22-60.164.133.37:59596.service - OpenSSH per-connection server daemon (60.164.133.37:59596).
May 10 00:23:53.637833 sshd[6611]: Connection closed by authenticating user root 60.164.133.37 port 59596 [preauth]
May 10 00:23:53.641118 systemd[1]: sshd@195-91.107.204.139:22-60.164.133.37:59596.service: Deactivated successfully.
May 10 00:23:53.842004 systemd[1]: Started sshd@196-91.107.204.139:22-60.164.133.37:32872.service - OpenSSH per-connection server daemon (60.164.133.37:32872).
May 10 00:23:54.782069 sshd[6616]: Connection closed by authenticating user root 60.164.133.37 port 32872 [preauth]
May 10 00:23:54.785772 systemd[1]: sshd@196-91.107.204.139:22-60.164.133.37:32872.service: Deactivated successfully.
May 10 00:23:54.990209 systemd[1]: Started sshd@197-91.107.204.139:22-60.164.133.37:34214.service - OpenSSH per-connection server daemon (60.164.133.37:34214).
May 10 00:23:55.939177 sshd[6633]: Connection closed by authenticating user root 60.164.133.37 port 34214 [preauth]
May 10 00:23:55.941888 systemd[1]: sshd@197-91.107.204.139:22-60.164.133.37:34214.service: Deactivated successfully.
May 10 00:23:56.144734 systemd[1]: Started sshd@198-91.107.204.139:22-60.164.133.37:35518.service - OpenSSH per-connection server daemon (60.164.133.37:35518).
May 10 00:23:57.094851 sshd[6638]: Connection closed by authenticating user root 60.164.133.37 port 35518 [preauth]
May 10 00:23:57.098205 systemd[1]: sshd@198-91.107.204.139:22-60.164.133.37:35518.service: Deactivated successfully.
May 10 00:23:57.294874 systemd[1]: Started sshd@199-91.107.204.139:22-60.164.133.37:37134.service - OpenSSH per-connection server daemon (60.164.133.37:37134).
May 10 00:23:58.239118 sshd[6648]: Connection closed by authenticating user root 60.164.133.37 port 37134 [preauth]
May 10 00:23:58.242853 systemd[1]: sshd@199-91.107.204.139:22-60.164.133.37:37134.service: Deactivated successfully.
May 10 00:23:58.447233 systemd[1]: Started sshd@200-91.107.204.139:22-60.164.133.37:38422.service - OpenSSH per-connection server daemon (60.164.133.37:38422).
May 10 00:23:59.393328 sshd[6671]: Connection closed by authenticating user root 60.164.133.37 port 38422 [preauth]
May 10 00:23:59.397734 systemd[1]: sshd@200-91.107.204.139:22-60.164.133.37:38422.service: Deactivated successfully.
May 10 00:23:59.708751 systemd[1]: Started sshd@201-91.107.204.139:22-60.164.133.37:39750.service - OpenSSH per-connection server daemon (60.164.133.37:39750).
May 10 00:24:00.931054 sshd[6676]: Connection closed by authenticating user root 60.164.133.37 port 39750 [preauth]
May 10 00:24:00.934616 systemd[1]: sshd@201-91.107.204.139:22-60.164.133.37:39750.service: Deactivated successfully.
May 10 00:24:01.079932 systemd[1]: Started sshd@202-91.107.204.139:22-60.164.133.37:41618.service - OpenSSH per-connection server daemon (60.164.133.37:41618).
May 10 00:24:02.026167 sshd[6681]: Connection closed by authenticating user root 60.164.133.37 port 41618 [preauth]
May 10 00:24:02.027956 systemd[1]: sshd@202-91.107.204.139:22-60.164.133.37:41618.service: Deactivated successfully.
May 10 00:24:02.236769 systemd[1]: Started sshd@203-91.107.204.139:22-60.164.133.37:43004.service - OpenSSH per-connection server daemon (60.164.133.37:43004).
May 10 00:24:03.179513 sshd[6687]: Connection closed by authenticating user root 60.164.133.37 port 43004 [preauth]
May 10 00:24:03.183173 systemd[1]: sshd@203-91.107.204.139:22-60.164.133.37:43004.service: Deactivated successfully.
May 10 00:24:03.384631 systemd[1]: Started sshd@204-91.107.204.139:22-60.164.133.37:44376.service - OpenSSH per-connection server daemon (60.164.133.37:44376).
May 10 00:24:03.605433 systemd[1]: run-containerd-runc-k8s.io-f759f747678e0cdb718c082b1a5fa944b0e16bbbec998edf64fe6be9f82f5c01-runc.1Fhe3q.mount: Deactivated successfully.
May 10 00:24:04.328007 sshd[6692]: Connection closed by authenticating user root 60.164.133.37 port 44376 [preauth]
May 10 00:24:04.331871 systemd[1]: sshd@204-91.107.204.139:22-60.164.133.37:44376.service: Deactivated successfully.
May 10 00:24:04.534758 systemd[1]: Started sshd@205-91.107.204.139:22-60.164.133.37:45600.service - OpenSSH per-connection server daemon (60.164.133.37:45600).
May 10 00:24:05.255256 systemd[1]: run-containerd-runc-k8s.io-d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc-runc.djMfa6.mount: Deactivated successfully.
May 10 00:24:05.484363 sshd[6721]: Connection closed by authenticating user root 60.164.133.37 port 45600 [preauth]
May 10 00:24:05.489427 systemd[1]: sshd@205-91.107.204.139:22-60.164.133.37:45600.service: Deactivated successfully.
May 10 00:24:05.695998 systemd[1]: Started sshd@206-91.107.204.139:22-60.164.133.37:47084.service - OpenSSH per-connection server daemon (60.164.133.37:47084).
May 10 00:24:06.647802 sshd[6745]: Connection closed by authenticating user root 60.164.133.37 port 47084 [preauth]
May 10 00:24:06.651836 systemd[1]: sshd@206-91.107.204.139:22-60.164.133.37:47084.service: Deactivated successfully.
May 10 00:24:06.846901 systemd[1]: Started sshd@207-91.107.204.139:22-60.164.133.37:48656.service - OpenSSH per-connection server daemon (60.164.133.37:48656).
May 10 00:24:07.819152 sshd[6750]: Connection closed by authenticating user root 60.164.133.37 port 48656 [preauth]
May 10 00:24:07.822654 systemd[1]: sshd@207-91.107.204.139:22-60.164.133.37:48656.service: Deactivated successfully.
May 10 00:24:08.016653 systemd[1]: Started sshd@208-91.107.204.139:22-60.164.133.37:50074.service - OpenSSH per-connection server daemon (60.164.133.37:50074).
May 10 00:24:08.954600 sshd[6755]: Connection closed by authenticating user root 60.164.133.37 port 50074 [preauth]
May 10 00:24:08.958248 systemd[1]: sshd@208-91.107.204.139:22-60.164.133.37:50074.service: Deactivated successfully.
May 10 00:24:09.161761 systemd[1]: Started sshd@209-91.107.204.139:22-60.164.133.37:51442.service - OpenSSH per-connection server daemon (60.164.133.37:51442).
May 10 00:24:10.102351 sshd[6760]: Connection closed by authenticating user root 60.164.133.37 port 51442 [preauth]
May 10 00:24:10.106485 systemd[1]: sshd@209-91.107.204.139:22-60.164.133.37:51442.service: Deactivated successfully.
May 10 00:24:10.311739 systemd[1]: Started sshd@210-91.107.204.139:22-60.164.133.37:52804.service - OpenSSH per-connection server daemon (60.164.133.37:52804).
May 10 00:24:11.265347 sshd[6765]: Connection closed by authenticating user root 60.164.133.37 port 52804 [preauth]
May 10 00:24:11.268953 systemd[1]: sshd@210-91.107.204.139:22-60.164.133.37:52804.service: Deactivated successfully.
May 10 00:24:11.466693 systemd[1]: Started sshd@211-91.107.204.139:22-60.164.133.37:54346.service - OpenSSH per-connection server daemon (60.164.133.37:54346).
May 10 00:24:12.409386 sshd[6770]: Connection closed by authenticating user root 60.164.133.37 port 54346 [preauth]
May 10 00:24:12.413809 systemd[1]: sshd@211-91.107.204.139:22-60.164.133.37:54346.service: Deactivated successfully.
May 10 00:24:12.614703 systemd[1]: Started sshd@212-91.107.204.139:22-60.164.133.37:55684.service - OpenSSH per-connection server daemon (60.164.133.37:55684).
May 10 00:24:13.567768 sshd[6775]: Connection closed by authenticating user root 60.164.133.37 port 55684 [preauth]
May 10 00:24:13.572106 systemd[1]: sshd@212-91.107.204.139:22-60.164.133.37:55684.service: Deactivated successfully.
May 10 00:24:13.772648 systemd[1]: Started sshd@213-91.107.204.139:22-60.164.133.37:57080.service - OpenSSH per-connection server daemon (60.164.133.37:57080).
May 10 00:24:14.735236 sshd[6780]: Connection closed by authenticating user root 60.164.133.37 port 57080 [preauth]
May 10 00:24:14.739323 systemd[1]: sshd@213-91.107.204.139:22-60.164.133.37:57080.service: Deactivated successfully.
May 10 00:24:14.940931 systemd[1]: Started sshd@214-91.107.204.139:22-60.164.133.37:58442.service - OpenSSH per-connection server daemon (60.164.133.37:58442).
May 10 00:24:15.890024 sshd[6785]: Connection closed by authenticating user root 60.164.133.37 port 58442 [preauth]
May 10 00:24:15.893709 systemd[1]: sshd@214-91.107.204.139:22-60.164.133.37:58442.service: Deactivated successfully.
May 10 00:24:16.086554 systemd[1]: Started sshd@215-91.107.204.139:22-60.164.133.37:59788.service - OpenSSH per-connection server daemon (60.164.133.37:59788).
May 10 00:24:17.044718 sshd[6790]: Connection closed by authenticating user root 60.164.133.37 port 59788 [preauth]
May 10 00:24:17.047882 systemd[1]: sshd@215-91.107.204.139:22-60.164.133.37:59788.service: Deactivated successfully.
May 10 00:24:17.247772 systemd[1]: Started sshd@216-91.107.204.139:22-60.164.133.37:32964.service - OpenSSH per-connection server daemon (60.164.133.37:32964).
May 10 00:24:18.200160 sshd[6795]: Connection closed by authenticating user root 60.164.133.37 port 32964 [preauth]
May 10 00:24:18.203607 systemd[1]: sshd@216-91.107.204.139:22-60.164.133.37:32964.service: Deactivated successfully.
May 10 00:24:18.400706 systemd[1]: Started sshd@217-91.107.204.139:22-60.164.133.37:34476.service - OpenSSH per-connection server daemon (60.164.133.37:34476).
May 10 00:24:19.346837 sshd[6800]: Connection closed by authenticating user root 60.164.133.37 port 34476 [preauth]
May 10 00:24:19.349645 systemd[1]: sshd@217-91.107.204.139:22-60.164.133.37:34476.service: Deactivated successfully.
May 10 00:24:19.557072 systemd[1]: Started sshd@218-91.107.204.139:22-60.164.133.37:35900.service - OpenSSH per-connection server daemon (60.164.133.37:35900).
May 10 00:24:20.511854 sshd[6807]: Connection closed by authenticating user root 60.164.133.37 port 35900 [preauth]
May 10 00:24:20.514982 systemd[1]: sshd@218-91.107.204.139:22-60.164.133.37:35900.service: Deactivated successfully.
May 10 00:24:20.709754 systemd[1]: Started sshd@219-91.107.204.139:22-60.164.133.37:37404.service - OpenSSH per-connection server daemon (60.164.133.37:37404).
May 10 00:24:21.654752 sshd[6816]: Connection closed by authenticating user root 60.164.133.37 port 37404 [preauth]
May 10 00:24:21.657880 systemd[1]: sshd@219-91.107.204.139:22-60.164.133.37:37404.service: Deactivated successfully.
May 10 00:24:21.857704 systemd[1]: Started sshd@220-91.107.204.139:22-60.164.133.37:38752.service - OpenSSH per-connection server daemon (60.164.133.37:38752).
May 10 00:24:22.807043 sshd[6822]: Connection closed by authenticating user root 60.164.133.37 port 38752 [preauth]
May 10 00:24:22.810764 systemd[1]: sshd@220-91.107.204.139:22-60.164.133.37:38752.service: Deactivated successfully.
May 10 00:24:23.011046 systemd[1]: Started sshd@221-91.107.204.139:22-60.164.133.37:40168.service - OpenSSH per-connection server daemon (60.164.133.37:40168).
May 10 00:24:23.955091 sshd[6828]: Connection closed by authenticating user root 60.164.133.37 port 40168 [preauth]
May 10 00:24:23.958703 systemd[1]: sshd@221-91.107.204.139:22-60.164.133.37:40168.service: Deactivated successfully.
May 10 00:24:24.264631 systemd[1]: Started sshd@222-91.107.204.139:22-60.164.133.37:41576.service - OpenSSH per-connection server daemon (60.164.133.37:41576).
May 10 00:24:25.489851 sshd[6833]: Connection closed by authenticating user root 60.164.133.37 port 41576 [preauth]
May 10 00:24:25.493068 systemd[1]: sshd@222-91.107.204.139:22-60.164.133.37:41576.service: Deactivated successfully.
May 10 00:24:25.637678 systemd[1]: Started sshd@223-91.107.204.139:22-60.164.133.37:43440.service - OpenSSH per-connection server daemon (60.164.133.37:43440).
May 10 00:24:26.576829 sshd[6838]: Connection closed by authenticating user root 60.164.133.37 port 43440 [preauth]
May 10 00:24:26.580729 systemd[1]: sshd@223-91.107.204.139:22-60.164.133.37:43440.service: Deactivated successfully.
May 10 00:24:26.784650 systemd[1]: Started sshd@224-91.107.204.139:22-60.164.133.37:44760.service - OpenSSH per-connection server daemon (60.164.133.37:44760).
May 10 00:24:27.727276 sshd[6843]: Connection closed by authenticating user root 60.164.133.37 port 44760 [preauth]
May 10 00:24:27.730578 systemd[1]: sshd@224-91.107.204.139:22-60.164.133.37:44760.service: Deactivated successfully.
May 10 00:24:27.934174 systemd[1]: Started sshd@225-91.107.204.139:22-60.164.133.37:46038.service - OpenSSH per-connection server daemon (60.164.133.37:46038).
May 10 00:24:28.884267 sshd[6848]: Connection closed by authenticating user root 60.164.133.37 port 46038 [preauth]
May 10 00:24:28.886849 systemd[1]: sshd@225-91.107.204.139:22-60.164.133.37:46038.service: Deactivated successfully.
May 10 00:24:29.089569 systemd[1]: Started sshd@226-91.107.204.139:22-60.164.133.37:47406.service - OpenSSH per-connection server daemon (60.164.133.37:47406).
May 10 00:24:30.034582 sshd[6853]: Connection closed by authenticating user root 60.164.133.37 port 47406 [preauth]
May 10 00:24:30.038651 systemd[1]: sshd@226-91.107.204.139:22-60.164.133.37:47406.service: Deactivated successfully.
May 10 00:24:30.248654 systemd[1]: Started sshd@227-91.107.204.139:22-60.164.133.37:48960.service - OpenSSH per-connection server daemon (60.164.133.37:48960).
May 10 00:24:31.194363 sshd[6858]: Connection closed by authenticating user root 60.164.133.37 port 48960 [preauth]
May 10 00:24:31.198354 systemd[1]: sshd@227-91.107.204.139:22-60.164.133.37:48960.service: Deactivated successfully.
May 10 00:24:31.402882 systemd[1]: Started sshd@228-91.107.204.139:22-60.164.133.37:50458.service - OpenSSH per-connection server daemon (60.164.133.37:50458).
May 10 00:24:32.353448 sshd[6863]: Connection closed by authenticating user root 60.164.133.37 port 50458 [preauth]
May 10 00:24:32.356259 systemd[1]: sshd@228-91.107.204.139:22-60.164.133.37:50458.service: Deactivated successfully.
May 10 00:24:32.558730 systemd[1]: Started sshd@229-91.107.204.139:22-60.164.133.37:51932.service - OpenSSH per-connection server daemon (60.164.133.37:51932).
May 10 00:24:33.513568 sshd[6868]: Connection closed by authenticating user root 60.164.133.37 port 51932 [preauth]
May 10 00:24:33.516681 systemd[1]: sshd@229-91.107.204.139:22-60.164.133.37:51932.service: Deactivated successfully.
May 10 00:24:33.715778 systemd[1]: Started sshd@230-91.107.204.139:22-60.164.133.37:53240.service - OpenSSH per-connection server daemon (60.164.133.37:53240).
May 10 00:24:34.652951 sshd[6897]: Connection closed by authenticating user root 60.164.133.37 port 53240 [preauth]
May 10 00:24:34.656125 systemd[1]: sshd@230-91.107.204.139:22-60.164.133.37:53240.service: Deactivated successfully.
May 10 00:24:34.863715 systemd[1]: Started sshd@231-91.107.204.139:22-60.164.133.37:54566.service - OpenSSH per-connection server daemon (60.164.133.37:54566).
May 10 00:24:35.823749 sshd[6902]: Connection closed by authenticating user root 60.164.133.37 port 54566 [preauth]
May 10 00:24:35.826713 systemd[1]: sshd@231-91.107.204.139:22-60.164.133.37:54566.service: Deactivated successfully.
May 10 00:24:36.030723 systemd[1]: Started sshd@232-91.107.204.139:22-60.164.133.37:56018.service - OpenSSH per-connection server daemon (60.164.133.37:56018).
May 10 00:24:36.970147 sshd[6926]: Connection closed by authenticating user root 60.164.133.37 port 56018 [preauth]
May 10 00:24:36.973517 systemd[1]: sshd@232-91.107.204.139:22-60.164.133.37:56018.service: Deactivated successfully.
May 10 00:24:37.172674 systemd[1]: Started sshd@233-91.107.204.139:22-60.164.133.37:57460.service - OpenSSH per-connection server daemon (60.164.133.37:57460).
May 10 00:24:38.141302 sshd[6931]: Connection closed by authenticating user root 60.164.133.37 port 57460 [preauth]
May 10 00:24:38.143264 systemd[1]: sshd@233-91.107.204.139:22-60.164.133.37:57460.service: Deactivated successfully.
May 10 00:24:38.342768 systemd[1]: Started sshd@234-91.107.204.139:22-60.164.133.37:58830.service - OpenSSH per-connection server daemon (60.164.133.37:58830).
May 10 00:24:39.303136 sshd[6936]: Connection closed by authenticating user root 60.164.133.37 port 58830 [preauth]
May 10 00:24:39.305724 systemd[1]: sshd@234-91.107.204.139:22-60.164.133.37:58830.service: Deactivated successfully.
May 10 00:24:39.511943 systemd[1]: Started sshd@235-91.107.204.139:22-60.164.133.37:60094.service - OpenSSH per-connection server daemon (60.164.133.37:60094).
May 10 00:24:40.477718 sshd[6941]: Connection closed by authenticating user root 60.164.133.37 port 60094 [preauth]
May 10 00:24:40.481792 systemd[1]: sshd@235-91.107.204.139:22-60.164.133.37:60094.service: Deactivated successfully.
May 10 00:24:40.681864 systemd[1]: Started sshd@236-91.107.204.139:22-60.164.133.37:33362.service - OpenSSH per-connection server daemon (60.164.133.37:33362).
May 10 00:24:41.620675 sshd[6946]: Connection closed by authenticating user root 60.164.133.37 port 33362 [preauth]
May 10 00:24:41.624416 systemd[1]: sshd@236-91.107.204.139:22-60.164.133.37:33362.service: Deactivated successfully.
May 10 00:24:41.828736 systemd[1]: Started sshd@237-91.107.204.139:22-60.164.133.37:34948.service - OpenSSH per-connection server daemon (60.164.133.37:34948).
May 10 00:24:42.777208 sshd[6951]: Connection closed by authenticating user root 60.164.133.37 port 34948 [preauth]
May 10 00:24:42.780693 systemd[1]: sshd@237-91.107.204.139:22-60.164.133.37:34948.service: Deactivated successfully.
May 10 00:24:42.987616 systemd[1]: Started sshd@238-91.107.204.139:22-60.164.133.37:36422.service - OpenSSH per-connection server daemon (60.164.133.37:36422).
May 10 00:24:43.938850 sshd[6956]: Connection closed by authenticating user root 60.164.133.37 port 36422 [preauth]
May 10 00:24:43.941391 systemd[1]: sshd@238-91.107.204.139:22-60.164.133.37:36422.service: Deactivated successfully.
May 10 00:24:44.146418 systemd[1]: Started sshd@239-91.107.204.139:22-60.164.133.37:37854.service - OpenSSH per-connection server daemon (60.164.133.37:37854).
May 10 00:24:45.101264 sshd[6961]: Connection closed by authenticating user root 60.164.133.37 port 37854 [preauth]
May 10 00:24:45.104179 systemd[1]: sshd@239-91.107.204.139:22-60.164.133.37:37854.service: Deactivated successfully.
May 10 00:24:45.304746 systemd[1]: Started sshd@240-91.107.204.139:22-60.164.133.37:39286.service - OpenSSH per-connection server daemon (60.164.133.37:39286).
May 10 00:24:46.254675 sshd[6966]: Connection closed by authenticating user root 60.164.133.37 port 39286 [preauth]
May 10 00:24:46.255433 systemd[1]: sshd@240-91.107.204.139:22-60.164.133.37:39286.service: Deactivated successfully.
May 10 00:24:46.453656 systemd[1]: Started sshd@241-91.107.204.139:22-60.164.133.37:40632.service - OpenSSH per-connection server daemon (60.164.133.37:40632).
May 10 00:24:47.409372 sshd[6971]: Connection closed by authenticating user root 60.164.133.37 port 40632 [preauth]
May 10 00:24:47.412978 systemd[1]: sshd@241-91.107.204.139:22-60.164.133.37:40632.service: Deactivated successfully.
May 10 00:24:47.608598 systemd[1]: Started sshd@242-91.107.204.139:22-60.164.133.37:42106.service - OpenSSH per-connection server daemon (60.164.133.37:42106).
May 10 00:24:48.548023 sshd[6976]: Connection closed by authenticating user root 60.164.133.37 port 42106 [preauth]
May 10 00:24:48.551838 systemd[1]: sshd@242-91.107.204.139:22-60.164.133.37:42106.service: Deactivated successfully.
May 10 00:24:48.752638 systemd[1]: Started sshd@243-91.107.204.139:22-60.164.133.37:43308.service - OpenSSH per-connection server daemon (60.164.133.37:43308).
May 10 00:24:49.688514 sshd[6983]: Connection closed by authenticating user root 60.164.133.37 port 43308 [preauth]
May 10 00:24:49.692402 systemd[1]: sshd@243-91.107.204.139:22-60.164.133.37:43308.service: Deactivated successfully.
May 10 00:24:49.898787 systemd[1]: Started sshd@244-91.107.204.139:22-60.164.133.37:44704.service - OpenSSH per-connection server daemon (60.164.133.37:44704).
May 10 00:24:50.855370 sshd[6988]: Connection closed by authenticating user root 60.164.133.37 port 44704 [preauth]
May 10 00:24:50.859141 systemd[1]: sshd@244-91.107.204.139:22-60.164.133.37:44704.service: Deactivated successfully.
May 10 00:24:51.056968 systemd[1]: Started sshd@245-91.107.204.139:22-60.164.133.37:46206.service - OpenSSH per-connection server daemon (60.164.133.37:46206).
May 10 00:24:52.003353 sshd[6993]: Connection closed by authenticating user root 60.164.133.37 port 46206 [preauth]
May 10 00:24:52.007094 systemd[1]: sshd@245-91.107.204.139:22-60.164.133.37:46206.service: Deactivated successfully.
May 10 00:24:52.207033 systemd[1]: Started sshd@246-91.107.204.139:22-60.164.133.37:47616.service - OpenSSH per-connection server daemon (60.164.133.37:47616).
May 10 00:24:53.162930 sshd[6998]: Connection closed by authenticating user root 60.164.133.37 port 47616 [preauth]
May 10 00:24:53.164509 systemd[1]: sshd@246-91.107.204.139:22-60.164.133.37:47616.service: Deactivated successfully.
May 10 00:24:53.374710 systemd[1]: Started sshd@247-91.107.204.139:22-60.164.133.37:49116.service - OpenSSH per-connection server daemon (60.164.133.37:49116).
May 10 00:24:54.359329 sshd[7003]: Connection closed by authenticating user root 60.164.133.37 port 49116 [preauth]
May 10 00:24:54.362682 systemd[1]: sshd@247-91.107.204.139:22-60.164.133.37:49116.service: Deactivated successfully.
May 10 00:24:54.554047 systemd[1]: Started sshd@248-91.107.204.139:22-60.164.133.37:50682.service - OpenSSH per-connection server daemon (60.164.133.37:50682).
May 10 00:24:55.520486 sshd[7008]: Connection closed by authenticating user root 60.164.133.37 port 50682 [preauth]
May 10 00:24:55.524246 systemd[1]: sshd@248-91.107.204.139:22-60.164.133.37:50682.service: Deactivated successfully.
May 10 00:24:55.731781 systemd[1]: Started sshd@249-91.107.204.139:22-60.164.133.37:52284.service - OpenSSH per-connection server daemon (60.164.133.37:52284).
May 10 00:24:56.687867 sshd[7013]: Connection closed by authenticating user root 60.164.133.37 port 52284 [preauth]
May 10 00:24:56.691537 systemd[1]: sshd@249-91.107.204.139:22-60.164.133.37:52284.service: Deactivated successfully.
May 10 00:24:56.891589 systemd[1]: Started sshd@250-91.107.204.139:22-60.164.133.37:53614.service - OpenSSH per-connection server daemon (60.164.133.37:53614).
May 10 00:24:57.848671 sshd[7018]: Connection closed by authenticating user root 60.164.133.37 port 53614 [preauth]
May 10 00:24:57.852687 systemd[1]: sshd@250-91.107.204.139:22-60.164.133.37:53614.service: Deactivated successfully.
May 10 00:24:58.158606 systemd[1]: Started sshd@251-91.107.204.139:22-60.164.133.37:54982.service - OpenSSH per-connection server daemon (60.164.133.37:54982).
May 10 00:24:59.369504 sshd[7040]: Connection closed by authenticating user root 60.164.133.37 port 54982 [preauth]
May 10 00:24:59.372774 systemd[1]: sshd@251-91.107.204.139:22-60.164.133.37:54982.service: Deactivated successfully.
May 10 00:24:59.522691 systemd[1]: Started sshd@252-91.107.204.139:22-60.164.133.37:56906.service - OpenSSH per-connection server daemon (60.164.133.37:56906).
May 10 00:25:00.459356 sshd[7045]: Connection closed by authenticating user root 60.164.133.37 port 56906 [preauth]
May 10 00:25:00.463343 systemd[1]: sshd@252-91.107.204.139:22-60.164.133.37:56906.service: Deactivated successfully.
May 10 00:25:00.672753 systemd[1]: Started sshd@253-91.107.204.139:22-60.164.133.37:58184.service - OpenSSH per-connection server daemon (60.164.133.37:58184).
May 10 00:25:01.616562 sshd[7050]: Connection closed by authenticating user root 60.164.133.37 port 58184 [preauth]
May 10 00:25:01.620200 systemd[1]: sshd@253-91.107.204.139:22-60.164.133.37:58184.service: Deactivated successfully.
May 10 00:25:01.816705 systemd[1]: Started sshd@254-91.107.204.139:22-60.164.133.37:59590.service - OpenSSH per-connection server daemon (60.164.133.37:59590).
May 10 00:25:02.753342 sshd[7055]: Connection closed by authenticating user root 60.164.133.37 port 59590 [preauth]
May 10 00:25:02.756926 systemd[1]: sshd@254-91.107.204.139:22-60.164.133.37:59590.service: Deactivated successfully.
May 10 00:25:02.956695 systemd[1]: Started sshd@255-91.107.204.139:22-60.164.133.37:32930.service - OpenSSH per-connection server daemon (60.164.133.37:32930).
May 10 00:25:03.899946 sshd[7068]: Connection closed by authenticating user root 60.164.133.37 port 32930 [preauth]
May 10 00:25:03.910164 systemd[1]: sshd@255-91.107.204.139:22-60.164.133.37:32930.service: Deactivated successfully.
May 10 00:25:04.100733 systemd[1]: Started sshd@256-91.107.204.139:22-60.164.133.37:34276.service - OpenSSH per-connection server daemon (60.164.133.37:34276).
May 10 00:25:05.041538 sshd[7095]: Connection closed by authenticating user root 60.164.133.37 port 34276 [preauth]
May 10 00:25:05.044914 systemd[1]: sshd@256-91.107.204.139:22-60.164.133.37:34276.service: Deactivated successfully.
May 10 00:25:05.252728 systemd[1]: Started sshd@257-91.107.204.139:22-60.164.133.37:35718.service - OpenSSH per-connection server daemon (60.164.133.37:35718).
May 10 00:25:06.208271 sshd[7106]: Connection closed by authenticating user root 60.164.133.37 port 35718 [preauth]
May 10 00:25:06.211874 systemd[1]: sshd@257-91.107.204.139:22-60.164.133.37:35718.service: Deactivated successfully.
May 10 00:25:06.417023 systemd[1]: Started sshd@258-91.107.204.139:22-60.164.133.37:37320.service - OpenSSH per-connection server daemon (60.164.133.37:37320).
May 10 00:25:07.373571 sshd[7125]: Connection closed by authenticating user root 60.164.133.37 port 37320 [preauth]
May 10 00:25:07.377673 systemd[1]: sshd@258-91.107.204.139:22-60.164.133.37:37320.service: Deactivated successfully.
May 10 00:25:07.572608 systemd[1]: Started sshd@259-91.107.204.139:22-60.164.133.37:38826.service - OpenSSH per-connection server daemon (60.164.133.37:38826).
May 10 00:25:08.517200 sshd[7130]: Connection closed by authenticating user root 60.164.133.37 port 38826 [preauth]
May 10 00:25:08.519533 systemd[1]: sshd@259-91.107.204.139:22-60.164.133.37:38826.service: Deactivated successfully.
May 10 00:25:08.717778 systemd[1]: Started sshd@260-91.107.204.139:22-60.164.133.37:40114.service - OpenSSH per-connection server daemon (60.164.133.37:40114).
May 10 00:25:09.646324 sshd[7135]: Connection closed by authenticating user root 60.164.133.37 port 40114 [preauth]
May 10 00:25:09.649376 systemd[1]: sshd@260-91.107.204.139:22-60.164.133.37:40114.service: Deactivated successfully.
May 10 00:25:09.847187 systemd[1]: Started sshd@261-91.107.204.139:22-60.164.133.37:41400.service - OpenSSH per-connection server daemon (60.164.133.37:41400).
May 10 00:25:10.798828 sshd[7140]: Connection closed by authenticating user root 60.164.133.37 port 41400 [preauth]
May 10 00:25:10.802402 systemd[1]: sshd@261-91.107.204.139:22-60.164.133.37:41400.service: Deactivated successfully.
May 10 00:25:11.012679 systemd[1]: Started sshd@262-91.107.204.139:22-60.164.133.37:42746.service - OpenSSH per-connection server daemon (60.164.133.37:42746).
May 10 00:25:11.972262 sshd[7145]: Connection closed by authenticating user root 60.164.133.37 port 42746 [preauth]
May 10 00:25:11.975150 systemd[1]: sshd@262-91.107.204.139:22-60.164.133.37:42746.service: Deactivated successfully.
May 10 00:25:12.172954 systemd[1]: Started sshd@263-91.107.204.139:22-60.164.133.37:44176.service - OpenSSH per-connection server daemon (60.164.133.37:44176).
May 10 00:25:13.120728 sshd[7150]: Connection closed by authenticating user root 60.164.133.37 port 44176 [preauth]
May 10 00:25:13.124241 systemd[1]: sshd@263-91.107.204.139:22-60.164.133.37:44176.service: Deactivated successfully.
May 10 00:25:13.326917 systemd[1]: Started sshd@264-91.107.204.139:22-60.164.133.37:45614.service - OpenSSH per-connection server daemon (60.164.133.37:45614).
May 10 00:25:14.285280 sshd[7155]: Connection closed by authenticating user root 60.164.133.37 port 45614 [preauth]
May 10 00:25:14.289155 systemd[1]: sshd@264-91.107.204.139:22-60.164.133.37:45614.service: Deactivated successfully.
May 10 00:25:14.494656 systemd[1]: Started sshd@265-91.107.204.139:22-60.164.133.37:47112.service - OpenSSH per-connection server daemon (60.164.133.37:47112).
May 10 00:25:15.450249 sshd[7160]: Connection closed by authenticating user root 60.164.133.37 port 47112 [preauth]
May 10 00:25:15.453357 systemd[1]: sshd@265-91.107.204.139:22-60.164.133.37:47112.service: Deactivated successfully.
May 10 00:25:15.656807 systemd[1]: Started sshd@266-91.107.204.139:22-60.164.133.37:48442.service - OpenSSH per-connection server daemon (60.164.133.37:48442).
May 10 00:25:16.609907 sshd[7165]: Connection closed by authenticating user root 60.164.133.37 port 48442 [preauth]
May 10 00:25:16.613620 systemd[1]: sshd@266-91.107.204.139:22-60.164.133.37:48442.service: Deactivated successfully.
May 10 00:25:16.818376 systemd[1]: Started sshd@267-91.107.204.139:22-60.164.133.37:49890.service - OpenSSH per-connection server daemon (60.164.133.37:49890).
May 10 00:25:17.762855 sshd[7170]: Connection closed by authenticating user root 60.164.133.37 port 49890 [preauth]
May 10 00:25:17.765208 systemd[1]: sshd@267-91.107.204.139:22-60.164.133.37:49890.service: Deactivated successfully.
May 10 00:25:17.972638 systemd[1]: Started sshd@268-91.107.204.139:22-60.164.133.37:51288.service - OpenSSH per-connection server daemon (60.164.133.37:51288).
May 10 00:25:18.948338 sshd[7175]: Connection closed by authenticating user root 60.164.133.37 port 51288 [preauth]
May 10 00:25:18.951855 systemd[1]: sshd@268-91.107.204.139:22-60.164.133.37:51288.service: Deactivated successfully.
May 10 00:25:19.148899 systemd[1]: Started sshd@269-91.107.204.139:22-60.164.133.37:52794.service - OpenSSH per-connection server daemon (60.164.133.37:52794).
May 10 00:25:20.113387 sshd[7182]: Connection closed by authenticating user root 60.164.133.37 port 52794 [preauth]
May 10 00:25:20.116482 systemd[1]: sshd@269-91.107.204.139:22-60.164.133.37:52794.service: Deactivated successfully.
May 10 00:25:20.316915 systemd[1]: Started sshd@270-91.107.204.139:22-60.164.133.37:54284.service - OpenSSH per-connection server daemon (60.164.133.37:54284).
May 10 00:25:21.258873 sshd[7187]: Connection closed by authenticating user root 60.164.133.37 port 54284 [preauth]
May 10 00:25:21.262792 systemd[1]: sshd@270-91.107.204.139:22-60.164.133.37:54284.service: Deactivated successfully.
May 10 00:25:21.463973 systemd[1]: Started sshd@271-91.107.204.139:22-60.164.133.37:55768.service - OpenSSH per-connection server daemon (60.164.133.37:55768).
May 10 00:25:22.406925 sshd[7192]: Connection closed by authenticating user root 60.164.133.37 port 55768 [preauth]
May 10 00:25:22.411071 systemd[1]: sshd@271-91.107.204.139:22-60.164.133.37:55768.service: Deactivated successfully.
May 10 00:25:22.614404 systemd[1]: Started sshd@272-91.107.204.139:22-60.164.133.37:56944.service - OpenSSH per-connection server daemon (60.164.133.37:56944).
May 10 00:25:23.566186 sshd[7197]: Connection closed by authenticating user root 60.164.133.37 port 56944 [preauth]
May 10 00:25:23.569515 systemd[1]: sshd@272-91.107.204.139:22-60.164.133.37:56944.service: Deactivated successfully.
May 10 00:25:23.771766 systemd[1]: Started sshd@273-91.107.204.139:22-60.164.133.37:58282.service - OpenSSH per-connection server daemon (60.164.133.37:58282).
May 10 00:25:24.712735 sshd[7202]: Connection closed by authenticating user root 60.164.133.37 port 58282 [preauth]
May 10 00:25:24.717012 systemd[1]: sshd@273-91.107.204.139:22-60.164.133.37:58282.service: Deactivated successfully.
May 10 00:25:24.928080 systemd[1]: Started sshd@274-91.107.204.139:22-60.164.133.37:59840.service - OpenSSH per-connection server daemon (60.164.133.37:59840).
May 10 00:25:25.894237 sshd[7207]: Connection closed by authenticating user root 60.164.133.37 port 59840 [preauth]
May 10 00:25:25.898029 systemd[1]: sshd@274-91.107.204.139:22-60.164.133.37:59840.service: Deactivated successfully.
May 10 00:25:26.105742 systemd[1]: Started sshd@275-91.107.204.139:22-60.164.133.37:33094.service - OpenSSH per-connection server daemon (60.164.133.37:33094).
May 10 00:25:27.044949 sshd[7212]: Connection closed by authenticating user root 60.164.133.37 port 33094 [preauth]
May 10 00:25:27.048376 systemd[1]: sshd@275-91.107.204.139:22-60.164.133.37:33094.service: Deactivated successfully.
May 10 00:25:27.251142 systemd[1]: Started sshd@276-91.107.204.139:22-60.164.133.37:34418.service - OpenSSH per-connection server daemon (60.164.133.37:34418).
May 10 00:25:28.194381 sshd[7217]: Connection closed by authenticating user root 60.164.133.37 port 34418 [preauth]
May 10 00:25:28.197445 systemd[1]: sshd@276-91.107.204.139:22-60.164.133.37:34418.service: Deactivated successfully.
May 10 00:25:28.404146 systemd[1]: Started sshd@277-91.107.204.139:22-60.164.133.37:35698.service - OpenSSH per-connection server daemon (60.164.133.37:35698).
May 10 00:25:29.373350 sshd[7234]: Connection closed by authenticating user root 60.164.133.37 port 35698 [preauth]
May 10 00:25:29.375275 systemd[1]: sshd@277-91.107.204.139:22-60.164.133.37:35698.service: Deactivated successfully.
May 10 00:25:29.579820 systemd[1]: Started sshd@278-91.107.204.139:22-60.164.133.37:37232.service - OpenSSH per-connection server daemon (60.164.133.37:37232).
May 10 00:25:30.519230 sshd[7239]: Connection closed by authenticating user root 60.164.133.37 port 37232 [preauth]
May 10 00:25:30.522984 systemd[1]: sshd@278-91.107.204.139:22-60.164.133.37:37232.service: Deactivated successfully.
May 10 00:25:30.722862 systemd[1]: Started sshd@279-91.107.204.139:22-60.164.133.37:38802.service - OpenSSH per-connection server daemon (60.164.133.37:38802).
May 10 00:25:31.672421 sshd[7244]: Connection closed by authenticating user root 60.164.133.37 port 38802 [preauth]
May 10 00:25:31.676652 systemd[1]: sshd@279-91.107.204.139:22-60.164.133.37:38802.service: Deactivated successfully.
May 10 00:25:31.882181 systemd[1]: Started sshd@280-91.107.204.139:22-60.164.133.37:40208.service - OpenSSH per-connection server daemon (60.164.133.37:40208).
May 10 00:25:32.844828 sshd[7254]: Connection closed by authenticating user root 60.164.133.37 port 40208 [preauth]
May 10 00:25:32.848113 systemd[1]: sshd@280-91.107.204.139:22-60.164.133.37:40208.service: Deactivated successfully.
May 10 00:25:33.043665 systemd[1]: Started sshd@281-91.107.204.139:22-60.164.133.37:41640.service - OpenSSH per-connection server daemon (60.164.133.37:41640).
May 10 00:25:33.979712 sshd[7259]: Connection closed by authenticating user root 60.164.133.37 port 41640 [preauth]
May 10 00:25:33.983119 systemd[1]: sshd@281-91.107.204.139:22-60.164.133.37:41640.service: Deactivated successfully.
May 10 00:25:34.183640 systemd[1]: Started sshd@282-91.107.204.139:22-60.164.133.37:42906.service - OpenSSH per-connection server daemon (60.164.133.37:42906).
May 10 00:25:35.136090 sshd[7288]: Connection closed by authenticating user root 60.164.133.37 port 42906 [preauth]
May 10 00:25:35.139576 systemd[1]: sshd@282-91.107.204.139:22-60.164.133.37:42906.service: Deactivated successfully.
May 10 00:25:35.335629 systemd[1]: Started sshd@283-91.107.204.139:22-60.164.133.37:44238.service - OpenSSH per-connection server daemon (60.164.133.37:44238).
May 10 00:25:36.279963 sshd[7311]: Connection closed by authenticating user root 60.164.133.37 port 44238 [preauth]
May 10 00:25:36.284146 systemd[1]: sshd@283-91.107.204.139:22-60.164.133.37:44238.service: Deactivated successfully.
May 10 00:25:36.485364 systemd[1]: Started sshd@284-91.107.204.139:22-60.164.133.37:45862.service - OpenSSH per-connection server daemon (60.164.133.37:45862).
May 10 00:25:37.422839 sshd[7316]: Connection closed by authenticating user root 60.164.133.37 port 45862 [preauth]
May 10 00:25:37.426514 systemd[1]: sshd@284-91.107.204.139:22-60.164.133.37:45862.service: Deactivated successfully.
May 10 00:25:37.623733 systemd[1]: Started sshd@285-91.107.204.139:22-60.164.133.37:47354.service - OpenSSH per-connection server daemon (60.164.133.37:47354).
May 10 00:25:38.562480 sshd[7321]: Connection closed by authenticating user root 60.164.133.37 port 47354 [preauth]
May 10 00:25:38.565988 systemd[1]: sshd@285-91.107.204.139:22-60.164.133.37:47354.service: Deactivated successfully.
May 10 00:25:38.769749 systemd[1]: Started sshd@286-91.107.204.139:22-60.164.133.37:48572.service - OpenSSH per-connection server daemon (60.164.133.37:48572).
May 10 00:25:39.718265 sshd[7326]: Connection closed by authenticating user root 60.164.133.37 port 48572 [preauth]
May 10 00:25:39.720314 systemd[1]: sshd@286-91.107.204.139:22-60.164.133.37:48572.service: Deactivated successfully.
May 10 00:25:39.919736 systemd[1]: Started sshd@287-91.107.204.139:22-60.164.133.37:49920.service - OpenSSH per-connection server daemon (60.164.133.37:49920).
May 10 00:25:40.855848 sshd[7331]: Connection closed by authenticating user root 60.164.133.37 port 49920 [preauth]
May 10 00:25:40.859123 systemd[1]: sshd@287-91.107.204.139:22-60.164.133.37:49920.service: Deactivated successfully.
May 10 00:25:41.065970 systemd[1]: Started sshd@288-91.107.204.139:22-60.164.133.37:51528.service - OpenSSH per-connection server daemon (60.164.133.37:51528).
May 10 00:25:42.018220 sshd[7336]: Connection closed by authenticating user root 60.164.133.37 port 51528 [preauth]
May 10 00:25:42.021521 systemd[1]: sshd@288-91.107.204.139:22-60.164.133.37:51528.service: Deactivated successfully.
May 10 00:25:42.219441 systemd[1]: Started sshd@289-91.107.204.139:22-60.164.133.37:52878.service - OpenSSH per-connection server daemon (60.164.133.37:52878).
May 10 00:25:43.170660 sshd[7341]: Connection closed by authenticating user root 60.164.133.37 port 52878 [preauth]
May 10 00:25:43.169826 systemd[1]: sshd@289-91.107.204.139:22-60.164.133.37:52878.service: Deactivated successfully.
May 10 00:25:43.362875 systemd[1]: Started sshd@290-91.107.204.139:22-60.164.133.37:54192.service - OpenSSH per-connection server daemon (60.164.133.37:54192).
May 10 00:25:44.308383 sshd[7346]: Connection closed by authenticating user root 60.164.133.37 port 54192 [preauth]
May 10 00:25:44.312594 systemd[1]: sshd@290-91.107.204.139:22-60.164.133.37:54192.service: Deactivated successfully.
May 10 00:25:44.512648 systemd[1]: Started sshd@291-91.107.204.139:22-60.164.133.37:55682.service - OpenSSH per-connection server daemon (60.164.133.37:55682).
May 10 00:25:45.450697 sshd[7351]: Connection closed by authenticating user root 60.164.133.37 port 55682 [preauth]
May 10 00:25:45.454115 systemd[1]: sshd@291-91.107.204.139:22-60.164.133.37:55682.service: Deactivated successfully.
May 10 00:25:45.650744 systemd[1]: Started sshd@292-91.107.204.139:22-60.164.133.37:56982.service - OpenSSH per-connection server daemon (60.164.133.37:56982).
May 10 00:25:46.587699 sshd[7356]: Connection closed by authenticating user root 60.164.133.37 port 56982 [preauth]
May 10 00:25:46.592244 systemd[1]: sshd@292-91.107.204.139:22-60.164.133.37:56982.service: Deactivated successfully.
May 10 00:25:46.793226 systemd[1]: Started sshd@293-91.107.204.139:22-60.164.133.37:58330.service - OpenSSH per-connection server daemon (60.164.133.37:58330).
May 10 00:25:47.725862 sshd[7361]: Connection closed by authenticating user root 60.164.133.37 port 58330 [preauth]
May 10 00:25:47.728498 systemd[1]: sshd@293-91.107.204.139:22-60.164.133.37:58330.service: Deactivated successfully.
May 10 00:25:47.944660 systemd[1]: Started sshd@294-91.107.204.139:22-60.164.133.37:59816.service - OpenSSH per-connection server daemon (60.164.133.37:59816).
May 10 00:25:48.894811 sshd[7366]: Connection closed by authenticating user root 60.164.133.37 port 59816 [preauth]
May 10 00:25:48.898628 systemd[1]: sshd@294-91.107.204.139:22-60.164.133.37:59816.service: Deactivated successfully.
May 10 00:25:49.101764 systemd[1]: Started sshd@295-91.107.204.139:22-60.164.133.37:32912.service - OpenSSH per-connection server daemon (60.164.133.37:32912).
May 10 00:25:50.035876 sshd[7373]: Connection closed by authenticating user root 60.164.133.37 port 32912 [preauth]
May 10 00:25:50.038485 systemd[1]: sshd@295-91.107.204.139:22-60.164.133.37:32912.service: Deactivated successfully.
May 10 00:25:50.247728 systemd[1]: Started sshd@296-91.107.204.139:22-60.164.133.37:34426.service - OpenSSH per-connection server daemon (60.164.133.37:34426).
May 10 00:25:51.208931 sshd[7378]: Connection closed by authenticating user root 60.164.133.37 port 34426 [preauth]
May 10 00:25:51.214750 systemd[1]: sshd@296-91.107.204.139:22-60.164.133.37:34426.service: Deactivated successfully.
May 10 00:25:51.424021 systemd[1]: Started sshd@297-91.107.204.139:22-60.164.133.37:35832.service - OpenSSH per-connection server daemon (60.164.133.37:35832).
May 10 00:25:52.389062 sshd[7383]: Connection closed by authenticating user root 60.164.133.37 port 35832 [preauth]
May 10 00:25:52.392097 systemd[1]: sshd@297-91.107.204.139:22-60.164.133.37:35832.service: Deactivated successfully.
May 10 00:25:52.598500 systemd[1]: Started sshd@298-91.107.204.139:22-60.164.133.37:37342.service - OpenSSH per-connection server daemon (60.164.133.37:37342).
May 10 00:25:53.551977 sshd[7388]: Connection closed by authenticating user root 60.164.133.37 port 37342 [preauth]
May 10 00:25:53.555565 systemd[1]: sshd@298-91.107.204.139:22-60.164.133.37:37342.service: Deactivated successfully.
May 10 00:25:53.755733 systemd[1]: Started sshd@299-91.107.204.139:22-60.164.133.37:38862.service - OpenSSH per-connection server daemon (60.164.133.37:38862).
May 10 00:25:54.689935 sshd[7393]: Connection closed by authenticating user root 60.164.133.37 port 38862 [preauth]
May 10 00:25:54.693961 systemd[1]: sshd@299-91.107.204.139:22-60.164.133.37:38862.service: Deactivated successfully.
May 10 00:25:54.895742 systemd[1]: Started sshd@300-91.107.204.139:22-60.164.133.37:40186.service - OpenSSH per-connection server daemon (60.164.133.37:40186).
May 10 00:25:55.845883 sshd[7398]: Connection closed by authenticating user root 60.164.133.37 port 40186 [preauth]
May 10 00:25:55.849125 systemd[1]: sshd@300-91.107.204.139:22-60.164.133.37:40186.service: Deactivated successfully.
May 10 00:25:56.057650 systemd[1]: Started sshd@301-91.107.204.139:22-60.164.133.37:41836.service - OpenSSH per-connection server daemon (60.164.133.37:41836).
May 10 00:25:57.027623 sshd[7403]: Connection closed by authenticating user root 60.164.133.37 port 41836 [preauth]
May 10 00:25:57.030841 systemd[1]: sshd@301-91.107.204.139:22-60.164.133.37:41836.service: Deactivated successfully.
May 10 00:25:57.226720 systemd[1]: Started sshd@302-91.107.204.139:22-60.164.133.37:43086.service - OpenSSH per-connection server daemon (60.164.133.37:43086).
May 10 00:25:58.188818 sshd[7408]: Connection closed by authenticating user root 60.164.133.37 port 43086 [preauth]
May 10 00:25:58.191897 systemd[1]: sshd@302-91.107.204.139:22-60.164.133.37:43086.service: Deactivated successfully.
May 10 00:25:58.496799 systemd[1]: Started sshd@303-91.107.204.139:22-60.164.133.37:44336.service - OpenSSH per-connection server daemon (60.164.133.37:44336).
May 10 00:25:59.705712 sshd[7432]: Connection closed by authenticating user root 60.164.133.37 port 44336 [preauth]
May 10 00:25:59.709576 systemd[1]: sshd@303-91.107.204.139:22-60.164.133.37:44336.service: Deactivated successfully.
May 10 00:25:59.860891 systemd[1]: Started sshd@304-91.107.204.139:22-60.164.133.37:46432.service - OpenSSH per-connection server daemon (60.164.133.37:46432).
May 10 00:26:00.823408 sshd[7437]: Connection closed by authenticating user root 60.164.133.37 port 46432 [preauth]
May 10 00:26:00.827634 systemd[1]: sshd@304-91.107.204.139:22-60.164.133.37:46432.service: Deactivated successfully.
May 10 00:26:01.136765 systemd[1]: Started sshd@305-91.107.204.139:22-60.164.133.37:47784.service - OpenSSH per-connection server daemon (60.164.133.37:47784).
May 10 00:26:02.348782 sshd[7442]: Connection closed by authenticating user root 60.164.133.37 port 47784 [preauth]
May 10 00:26:02.353132 systemd[1]: sshd@305-91.107.204.139:22-60.164.133.37:47784.service: Deactivated successfully.
May 10 00:26:02.498750 systemd[1]: Started sshd@306-91.107.204.139:22-60.164.133.37:49478.service - OpenSSH per-connection server daemon (60.164.133.37:49478).
May 10 00:26:03.436520 sshd[7447]: Connection closed by authenticating user root 60.164.133.37 port 49478 [preauth]
May 10 00:26:03.440606 systemd[1]: sshd@306-91.107.204.139:22-60.164.133.37:49478.service: Deactivated successfully.
May 10 00:26:03.637846 systemd[1]: Started sshd@307-91.107.204.139:22-60.164.133.37:50898.service - OpenSSH per-connection server daemon (60.164.133.37:50898).
May 10 00:26:04.596377 sshd[7472]: Connection closed by authenticating user root 60.164.133.37 port 50898 [preauth]
May 10 00:26:04.600621 systemd[1]: sshd@307-91.107.204.139:22-60.164.133.37:50898.service: Deactivated successfully.
May 10 00:26:04.803736 systemd[1]: Started sshd@308-91.107.204.139:22-60.164.133.37:52454.service - OpenSSH per-connection server daemon (60.164.133.37:52454).
May 10 00:26:05.750376 sshd[7479]: Connection closed by authenticating user root 60.164.133.37 port 52454 [preauth]
May 10 00:26:05.754152 systemd[1]: sshd@308-91.107.204.139:22-60.164.133.37:52454.service: Deactivated successfully.
May 10 00:26:05.951844 systemd[1]: Started sshd@309-91.107.204.139:22-60.164.133.37:53818.service - OpenSSH per-connection server daemon (60.164.133.37:53818).
May 10 00:26:06.890315 sshd[7502]: Connection closed by authenticating user root 60.164.133.37 port 53818 [preauth]
May 10 00:26:06.894447 systemd[1]: sshd@309-91.107.204.139:22-60.164.133.37:53818.service: Deactivated successfully.
May 10 00:26:07.101839 systemd[1]: Started sshd@310-91.107.204.139:22-60.164.133.37:55286.service - OpenSSH per-connection server daemon (60.164.133.37:55286).
May 10 00:26:08.063362 sshd[7507]: Connection closed by authenticating user root 60.164.133.37 port 55286 [preauth]
May 10 00:26:08.066131 systemd[1]: sshd@310-91.107.204.139:22-60.164.133.37:55286.service: Deactivated successfully.
May 10 00:26:08.267733 systemd[1]: Started sshd@311-91.107.204.139:22-60.164.133.37:56886.service - OpenSSH per-connection server daemon (60.164.133.37:56886).
May 10 00:26:09.212243 sshd[7512]: Connection closed by authenticating user root 60.164.133.37 port 56886 [preauth]
May 10 00:26:09.216097 systemd[1]: sshd@311-91.107.204.139:22-60.164.133.37:56886.service: Deactivated successfully.
May 10 00:26:09.416751 systemd[1]: Started sshd@312-91.107.204.139:22-60.164.133.37:58202.service - OpenSSH per-connection server daemon (60.164.133.37:58202).
May 10 00:26:10.357595 sshd[7517]: Connection closed by authenticating user root 60.164.133.37 port 58202 [preauth]
May 10 00:26:10.362591 systemd[1]: sshd@312-91.107.204.139:22-60.164.133.37:58202.service: Deactivated successfully.
May 10 00:26:10.560777 systemd[1]: Started sshd@313-91.107.204.139:22-60.164.133.37:59528.service - OpenSSH per-connection server daemon (60.164.133.37:59528).
May 10 00:26:11.521773 sshd[7522]: Connection closed by authenticating user root 60.164.133.37 port 59528 [preauth]
May 10 00:26:11.524557 systemd[1]: sshd@313-91.107.204.139:22-60.164.133.37:59528.service: Deactivated successfully.
May 10 00:26:11.728727 systemd[1]: Started sshd@314-91.107.204.139:22-60.164.133.37:32898.service - OpenSSH per-connection server daemon (60.164.133.37:32898).
May 10 00:26:12.672189 sshd[7527]: Connection closed by authenticating user root 60.164.133.37 port 32898 [preauth]
May 10 00:26:12.675857 systemd[1]: sshd@314-91.107.204.139:22-60.164.133.37:32898.service: Deactivated successfully.
May 10 00:26:12.874759 systemd[1]: Started sshd@315-91.107.204.139:22-60.164.133.37:34254.service - OpenSSH per-connection server daemon (60.164.133.37:34254).
May 10 00:26:13.807818 sshd[7532]: Connection closed by authenticating user root 60.164.133.37 port 34254 [preauth]
May 10 00:26:13.810780 systemd[1]: sshd@315-91.107.204.139:22-60.164.133.37:34254.service: Deactivated successfully.
May 10 00:26:14.018643 systemd[1]: Started sshd@316-91.107.204.139:22-60.164.133.37:35510.service - OpenSSH per-connection server daemon (60.164.133.37:35510).
May 10 00:26:14.387712 systemd[1]: Started sshd@317-91.107.204.139:22-147.75.109.163:39898.service - OpenSSH per-connection server daemon (147.75.109.163:39898).
May 10 00:26:14.984744 sshd[7538]: Connection closed by authenticating user root 60.164.133.37 port 35510 [preauth]
May 10 00:26:14.987009 systemd[1]: sshd@316-91.107.204.139:22-60.164.133.37:35510.service: Deactivated successfully.
May 10 00:26:15.191260 systemd[1]: Started sshd@318-91.107.204.139:22-60.164.133.37:36990.service - OpenSSH per-connection server daemon (60.164.133.37:36990).
May 10 00:26:15.387668 sshd[7543]: Accepted publickey for core from 147.75.109.163 port 39898 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:15.390286 sshd[7543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:15.398655 systemd-logind[1458]: New session 8 of user core.
May 10 00:26:15.404532 systemd[1]: Started session-8.scope - Session 8 of User core.
May 10 00:26:16.152887 sshd[7549]: Connection closed by authenticating user root 60.164.133.37 port 36990 [preauth]
May 10 00:26:16.155914 systemd[1]: sshd@318-91.107.204.139:22-60.164.133.37:36990.service: Deactivated successfully.
May 10 00:26:16.175447 sshd[7543]: pam_unix(sshd:session): session closed for user core
May 10 00:26:16.179550 systemd[1]: sshd@317-91.107.204.139:22-147.75.109.163:39898.service: Deactivated successfully.
May 10 00:26:16.182161 systemd[1]: session-8.scope: Deactivated successfully.
May 10 00:26:16.184710 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
May 10 00:26:16.186371 systemd-logind[1458]: Removed session 8.
May 10 00:26:16.361791 systemd[1]: Started sshd@319-91.107.204.139:22-60.164.133.37:38634.service - OpenSSH per-connection server daemon (60.164.133.37:38634).
May 10 00:26:17.314761 sshd[7565]: Connection closed by authenticating user root 60.164.133.37 port 38634 [preauth]
May 10 00:26:17.317747 systemd[1]: sshd@319-91.107.204.139:22-60.164.133.37:38634.service: Deactivated successfully.
May 10 00:26:17.524664 systemd[1]: Started sshd@320-91.107.204.139:22-60.164.133.37:40080.service - OpenSSH per-connection server daemon (60.164.133.37:40080).
May 10 00:26:18.493227 sshd[7570]: Connection closed by authenticating user root 60.164.133.37 port 40080 [preauth]
May 10 00:26:18.496559 systemd[1]: sshd@320-91.107.204.139:22-60.164.133.37:40080.service: Deactivated successfully.
May 10 00:26:18.694697 systemd[1]: Started sshd@321-91.107.204.139:22-60.164.133.37:41496.service - OpenSSH per-connection server daemon (60.164.133.37:41496).
May 10 00:26:19.646134 sshd[7577]: Connection closed by authenticating user root 60.164.133.37 port 41496 [preauth]
May 10 00:26:19.648994 systemd[1]: sshd@321-91.107.204.139:22-60.164.133.37:41496.service: Deactivated successfully.
May 10 00:26:19.847735 systemd[1]: Started sshd@322-91.107.204.139:22-60.164.133.37:43044.service - OpenSSH per-connection server daemon (60.164.133.37:43044).
May 10 00:26:20.791092 sshd[7582]: Connection closed by authenticating user root 60.164.133.37 port 43044 [preauth]
May 10 00:26:20.794608 systemd[1]: sshd@322-91.107.204.139:22-60.164.133.37:43044.service: Deactivated successfully.
May 10 00:26:20.989224 systemd[1]: Started sshd@323-91.107.204.139:22-60.164.133.37:44404.service - OpenSSH per-connection server daemon (60.164.133.37:44404).
May 10 00:26:21.359847 systemd[1]: Started sshd@324-91.107.204.139:22-147.75.109.163:57938.service - OpenSSH per-connection server daemon (147.75.109.163:57938).
May 10 00:26:21.924487 sshd[7587]: Connection closed by authenticating user root 60.164.133.37 port 44404 [preauth]
May 10 00:26:21.928809 systemd[1]: sshd@323-91.107.204.139:22-60.164.133.37:44404.service: Deactivated successfully.
May 10 00:26:22.130886 systemd[1]: Started sshd@325-91.107.204.139:22-60.164.133.37:45636.service - OpenSSH per-connection server daemon (60.164.133.37:45636).
May 10 00:26:22.375707 sshd[7590]: Accepted publickey for core from 147.75.109.163 port 57938 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:22.378223 sshd[7590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:22.385499 systemd-logind[1458]: New session 9 of user core.
May 10 00:26:22.394020 systemd[1]: Started session-9.scope - Session 9 of User core.
May 10 00:26:23.077205 sshd[7595]: Connection closed by authenticating user root 60.164.133.37 port 45636 [preauth]
May 10 00:26:23.080998 systemd[1]: sshd@325-91.107.204.139:22-60.164.133.37:45636.service: Deactivated successfully.
May 10 00:26:23.152662 sshd[7590]: pam_unix(sshd:session): session closed for user core
May 10 00:26:23.158747 systemd[1]: sshd@324-91.107.204.139:22-147.75.109.163:57938.service: Deactivated successfully.
May 10 00:26:23.161608 systemd[1]: session-9.scope: Deactivated successfully.
May 10 00:26:23.164867 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
May 10 00:26:23.166280 systemd-logind[1458]: Removed session 9.
May 10 00:26:23.277777 systemd[1]: Started sshd@326-91.107.204.139:22-60.164.133.37:47148.service - OpenSSH per-connection server daemon (60.164.133.37:47148).
May 10 00:26:24.217719 sshd[7611]: Connection closed by authenticating user root 60.164.133.37 port 47148 [preauth]
May 10 00:26:24.221944 systemd[1]: sshd@326-91.107.204.139:22-60.164.133.37:47148.service: Deactivated successfully.
May 10 00:26:24.533775 systemd[1]: Started sshd@327-91.107.204.139:22-60.164.133.37:48608.service - OpenSSH per-connection server daemon (60.164.133.37:48608).
May 10 00:26:25.751758 sshd[7616]: Connection closed by authenticating user root 60.164.133.37 port 48608 [preauth]
May 10 00:26:25.754916 systemd[1]: sshd@327-91.107.204.139:22-60.164.133.37:48608.service: Deactivated successfully.
May 10 00:26:25.902710 systemd[1]: Started sshd@328-91.107.204.139:22-60.164.133.37:50342.service - OpenSSH per-connection server daemon (60.164.133.37:50342).
May 10 00:26:26.867792 sshd[7621]: Connection closed by authenticating user root 60.164.133.37 port 50342 [preauth]
May 10 00:26:26.872577 systemd[1]: sshd@328-91.107.204.139:22-60.164.133.37:50342.service: Deactivated successfully.
May 10 00:26:27.068556 systemd[1]: Started sshd@329-91.107.204.139:22-60.164.133.37:51908.service - OpenSSH per-connection server daemon (60.164.133.37:51908).
May 10 00:26:28.033259 sshd[7626]: Connection closed by authenticating user root 60.164.133.37 port 51908 [preauth]
May 10 00:26:28.036792 systemd[1]: sshd@329-91.107.204.139:22-60.164.133.37:51908.service: Deactivated successfully.
May 10 00:26:28.230664 systemd[1]: Started sshd@330-91.107.204.139:22-60.164.133.37:53460.service - OpenSSH per-connection server daemon (60.164.133.37:53460).
May 10 00:26:28.339957 systemd[1]: Started sshd@331-91.107.204.139:22-147.75.109.163:36482.service - OpenSSH per-connection server daemon (147.75.109.163:36482).
May 10 00:26:29.170085 sshd[7631]: Connection closed by authenticating user root 60.164.133.37 port 53460 [preauth]
May 10 00:26:29.173243 systemd[1]: sshd@330-91.107.204.139:22-60.164.133.37:53460.service: Deactivated successfully.
May 10 00:26:29.353785 sshd[7634]: Accepted publickey for core from 147.75.109.163 port 36482 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:29.356151 sshd[7634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:29.370696 systemd[1]: Started sshd@332-91.107.204.139:22-60.164.133.37:54848.service - OpenSSH per-connection server daemon (60.164.133.37:54848).
May 10 00:26:29.377105 systemd-logind[1458]: New session 10 of user core.
May 10 00:26:29.380654 systemd[1]: Started session-10.scope - Session 10 of User core.
May 10 00:26:30.133607 sshd[7634]: pam_unix(sshd:session): session closed for user core
May 10 00:26:30.139680 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
May 10 00:26:30.140959 systemd[1]: sshd@331-91.107.204.139:22-147.75.109.163:36482.service: Deactivated successfully.
May 10 00:26:30.143801 systemd[1]: session-10.scope: Deactivated successfully.
May 10 00:26:30.145077 systemd-logind[1458]: Removed session 10.
May 10 00:26:30.312065 sshd[7639]: Connection closed by authenticating user root 60.164.133.37 port 54848 [preauth]
May 10 00:26:30.314607 systemd[1]: Started sshd@333-91.107.204.139:22-147.75.109.163:36488.service - OpenSSH per-connection server daemon (147.75.109.163:36488).
May 10 00:26:30.315068 systemd[1]: sshd@332-91.107.204.139:22-60.164.133.37:54848.service: Deactivated successfully.
May 10 00:26:30.513865 systemd[1]: Started sshd@334-91.107.204.139:22-60.164.133.37:56268.service - OpenSSH per-connection server daemon (60.164.133.37:56268).
May 10 00:26:31.326991 sshd[7652]: Accepted publickey for core from 147.75.109.163 port 36488 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:31.330194 sshd[7652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:31.335984 systemd-logind[1458]: New session 11 of user core.
May 10 00:26:31.341565 systemd[1]: Started session-11.scope - Session 11 of User core.
May 10 00:26:31.464979 sshd[7657]: Connection closed by authenticating user root 60.164.133.37 port 56268 [preauth]
May 10 00:26:31.468136 systemd[1]: sshd@334-91.107.204.139:22-60.164.133.37:56268.service: Deactivated successfully.
May 10 00:26:31.668739 systemd[1]: Started sshd@335-91.107.204.139:22-60.164.133.37:57710.service - OpenSSH per-connection server daemon (60.164.133.37:57710).
May 10 00:26:32.142675 sshd[7652]: pam_unix(sshd:session): session closed for user core
May 10 00:26:32.149785 systemd[1]: sshd@333-91.107.204.139:22-147.75.109.163:36488.service: Deactivated successfully.
May 10 00:26:32.153815 systemd[1]: session-11.scope: Deactivated successfully.
May 10 00:26:32.156053 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
May 10 00:26:32.157950 systemd-logind[1458]: Removed session 11.
May 10 00:26:32.322897 systemd[1]: Started sshd@336-91.107.204.139:22-147.75.109.163:36502.service - OpenSSH per-connection server daemon (147.75.109.163:36502).
May 10 00:26:32.605380 sshd[7663]: Connection closed by authenticating user root 60.164.133.37 port 57710 [preauth]
May 10 00:26:32.608479 systemd[1]: sshd@335-91.107.204.139:22-60.164.133.37:57710.service: Deactivated successfully.
May 10 00:26:32.804713 systemd[1]: Started sshd@337-91.107.204.139:22-60.164.133.37:59136.service - OpenSSH per-connection server daemon (60.164.133.37:59136).
May 10 00:26:33.333971 sshd[7673]: Accepted publickey for core from 147.75.109.163 port 36502 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:33.336823 sshd[7673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:33.349989 systemd-logind[1458]: New session 12 of user core.
May 10 00:26:33.356638 systemd[1]: Started session-12.scope - Session 12 of User core.
May 10 00:26:33.743984 sshd[7678]: Connection closed by authenticating user root 60.164.133.37 port 59136 [preauth]
May 10 00:26:33.747560 systemd[1]: sshd@337-91.107.204.139:22-60.164.133.37:59136.service: Deactivated successfully.
May 10 00:26:33.945825 systemd[1]: Started sshd@338-91.107.204.139:22-60.164.133.37:60464.service - OpenSSH per-connection server daemon (60.164.133.37:60464).
May 10 00:26:34.107012 sshd[7673]: pam_unix(sshd:session): session closed for user core
May 10 00:26:34.112823 systemd[1]: sshd@336-91.107.204.139:22-147.75.109.163:36502.service: Deactivated successfully.
May 10 00:26:34.116192 systemd[1]: session-12.scope: Deactivated successfully.
May 10 00:26:34.117796 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
May 10 00:26:34.118865 systemd-logind[1458]: Removed session 12.
May 10 00:26:34.889891 sshd[7718]: Connection closed by authenticating user root 60.164.133.37 port 60464 [preauth]
May 10 00:26:34.893007 systemd[1]: sshd@338-91.107.204.139:22-60.164.133.37:60464.service: Deactivated successfully.
May 10 00:26:35.097783 systemd[1]: Started sshd@339-91.107.204.139:22-60.164.133.37:33742.service - OpenSSH per-connection server daemon (60.164.133.37:33742).
May 10 00:26:36.047911 sshd[7725]: Connection closed by authenticating user root 60.164.133.37 port 33742 [preauth]
May 10 00:26:36.052652 systemd[1]: sshd@339-91.107.204.139:22-60.164.133.37:33742.service: Deactivated successfully.
May 10 00:26:36.367432 systemd[1]: Started sshd@340-91.107.204.139:22-60.164.133.37:35132.service - OpenSSH per-connection server daemon (60.164.133.37:35132).
May 10 00:26:37.577437 sshd[7748]: Connection closed by authenticating user root 60.164.133.37 port 35132 [preauth]
May 10 00:26:37.581015 systemd[1]: sshd@340-91.107.204.139:22-60.164.133.37:35132.service: Deactivated successfully.
May 10 00:26:37.734602 systemd[1]: Started sshd@341-91.107.204.139:22-60.164.133.37:37002.service - OpenSSH per-connection server daemon (60.164.133.37:37002).
May 10 00:26:38.688944 sshd[7753]: Connection closed by authenticating user root 60.164.133.37 port 37002 [preauth]
May 10 00:26:38.692665 systemd[1]: sshd@341-91.107.204.139:22-60.164.133.37:37002.service: Deactivated successfully.
May 10 00:26:38.888111 systemd[1]: Started sshd@342-91.107.204.139:22-60.164.133.37:38532.service - OpenSSH per-connection server daemon (60.164.133.37:38532).
May 10 00:26:39.292580 systemd[1]: Started sshd@343-91.107.204.139:22-147.75.109.163:53328.service - OpenSSH per-connection server daemon (147.75.109.163:53328).
May 10 00:26:39.825567 sshd[7758]: Connection closed by authenticating user root 60.164.133.37 port 38532 [preauth]
May 10 00:26:39.829648 systemd[1]: sshd@342-91.107.204.139:22-60.164.133.37:38532.service: Deactivated successfully.
May 10 00:26:40.030651 systemd[1]: Started sshd@344-91.107.204.139:22-60.164.133.37:39876.service - OpenSSH per-connection server daemon (60.164.133.37:39876).
May 10 00:26:40.305086 sshd[7761]: Accepted publickey for core from 147.75.109.163 port 53328 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:40.306975 sshd[7761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:40.312755 systemd-logind[1458]: New session 13 of user core.
May 10 00:26:40.319517 systemd[1]: Started session-13.scope - Session 13 of User core.
May 10 00:26:40.998107 sshd[7766]: Connection closed by authenticating user root 60.164.133.37 port 39876 [preauth]
May 10 00:26:41.001152 systemd[1]: sshd@344-91.107.204.139:22-60.164.133.37:39876.service: Deactivated successfully.
May 10 00:26:41.078472 sshd[7761]: pam_unix(sshd:session): session closed for user core
May 10 00:26:41.084723 systemd[1]: sshd@343-91.107.204.139:22-147.75.109.163:53328.service: Deactivated successfully.
May 10 00:26:41.089225 systemd[1]: session-13.scope: Deactivated successfully.
May 10 00:26:41.090860 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
May 10 00:26:41.092399 systemd-logind[1458]: Removed session 13.
May 10 00:26:41.207659 systemd[1]: Started sshd@345-91.107.204.139:22-60.164.133.37:41316.service - OpenSSH per-connection server daemon (60.164.133.37:41316).
May 10 00:26:41.255539 systemd[1]: Started sshd@346-91.107.204.139:22-147.75.109.163:53344.service - OpenSSH per-connection server daemon (147.75.109.163:53344).
May 10 00:26:42.160319 sshd[7781]: Connection closed by authenticating user root 60.164.133.37 port 41316 [preauth]
May 10 00:26:42.162501 systemd[1]: sshd@345-91.107.204.139:22-60.164.133.37:41316.service: Deactivated successfully.
May 10 00:26:42.250716 sshd[7784]: Accepted publickey for core from 147.75.109.163 port 53344 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:42.252332 sshd[7784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:42.258907 systemd-logind[1458]: New session 14 of user core.
May 10 00:26:42.265610 systemd[1]: Started session-14.scope - Session 14 of User core.
May 10 00:26:42.368765 systemd[1]: Started sshd@347-91.107.204.139:22-60.164.133.37:42800.service - OpenSSH per-connection server daemon (60.164.133.37:42800).
May 10 00:26:43.139851 sshd[7784]: pam_unix(sshd:session): session closed for user core
May 10 00:26:43.143507 systemd[1]: sshd@346-91.107.204.139:22-147.75.109.163:53344.service: Deactivated successfully.
May 10 00:26:43.147441 systemd[1]: session-14.scope: Deactivated successfully.
May 10 00:26:43.150141 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
May 10 00:26:43.151476 systemd-logind[1458]: Removed session 14.
May 10 00:26:43.303009 sshd[7790]: Connection closed by authenticating user root 60.164.133.37 port 42800 [preauth]
May 10 00:26:43.306440 systemd[1]: sshd@347-91.107.204.139:22-60.164.133.37:42800.service: Deactivated successfully.
May 10 00:26:43.327851 systemd[1]: Started sshd@348-91.107.204.139:22-147.75.109.163:53348.service - OpenSSH per-connection server daemon (147.75.109.163:53348).
May 10 00:26:43.514686 systemd[1]: Started sshd@349-91.107.204.139:22-60.164.133.37:44180.service - OpenSSH per-connection server daemon (60.164.133.37:44180).
May 10 00:26:44.340639 sshd[7802]: Accepted publickey for core from 147.75.109.163 port 53348 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:44.342860 sshd[7802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:44.349664 systemd-logind[1458]: New session 15 of user core.
May 10 00:26:44.355664 systemd[1]: Started session-15.scope - Session 15 of User core.
May 10 00:26:44.461037 sshd[7805]: Connection closed by authenticating user root 60.164.133.37 port 44180 [preauth]
May 10 00:26:44.464448 systemd[1]: sshd@349-91.107.204.139:22-60.164.133.37:44180.service: Deactivated successfully.
May 10 00:26:44.778015 systemd[1]: Started sshd@350-91.107.204.139:22-60.164.133.37:45658.service - OpenSSH per-connection server daemon (60.164.133.37:45658).
May 10 00:26:46.000686 sshd[7811]: Connection closed by authenticating user root 60.164.133.37 port 45658 [preauth]
May 10 00:26:46.003353 systemd[1]: sshd@350-91.107.204.139:22-60.164.133.37:45658.service: Deactivated successfully.
May 10 00:26:46.140702 systemd[1]: Started sshd@351-91.107.204.139:22-60.164.133.37:47456.service - OpenSSH per-connection server daemon (60.164.133.37:47456).
May 10 00:26:47.039735 sshd[7802]: pam_unix(sshd:session): session closed for user core
May 10 00:26:47.044832 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
May 10 00:26:47.045973 systemd[1]: sshd@348-91.107.204.139:22-147.75.109.163:53348.service: Deactivated successfully.
May 10 00:26:47.049913 systemd[1]: session-15.scope: Deactivated successfully.
May 10 00:26:47.051911 systemd-logind[1458]: Removed session 15.
May 10 00:26:47.086324 sshd[7824]: Connection closed by authenticating user root 60.164.133.37 port 47456 [preauth]
May 10 00:26:47.090802 systemd[1]: sshd@351-91.107.204.139:22-60.164.133.37:47456.service: Deactivated successfully.
May 10 00:26:47.216010 systemd[1]: Started sshd@352-91.107.204.139:22-147.75.109.163:43504.service - OpenSSH per-connection server daemon (147.75.109.163:43504).
May 10 00:26:47.299655 systemd[1]: Started sshd@353-91.107.204.139:22-60.164.133.37:48936.service - OpenSSH per-connection server daemon (60.164.133.37:48936).
May 10 00:26:48.212582 sshd[7836]: Accepted publickey for core from 147.75.109.163 port 43504 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:48.214766 sshd[7836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:48.221648 systemd-logind[1458]: New session 16 of user core.
May 10 00:26:48.227613 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:26:48.252465 sshd[7839]: Connection closed by authenticating user root 60.164.133.37 port 48936 [preauth]
May 10 00:26:48.257861 systemd[1]: sshd@353-91.107.204.139:22-60.164.133.37:48936.service: Deactivated successfully.
May 10 00:26:48.458685 systemd[1]: Started sshd@354-91.107.204.139:22-60.164.133.37:50282.service - OpenSSH per-connection server daemon (60.164.133.37:50282).
May 10 00:26:49.107235 sshd[7836]: pam_unix(sshd:session): session closed for user core
May 10 00:26:49.114390 systemd[1]: sshd@352-91.107.204.139:22-147.75.109.163:43504.service: Deactivated successfully.
May 10 00:26:49.117078 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:26:49.120005 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
May 10 00:26:49.121073 systemd-logind[1458]: Removed session 16.
May 10 00:26:49.291835 systemd[1]: Started sshd@355-91.107.204.139:22-147.75.109.163:43508.service - OpenSSH per-connection server daemon (147.75.109.163:43508).
May 10 00:26:49.405607 sshd[7847]: Connection closed by authenticating user root 60.164.133.37 port 50282 [preauth]
May 10 00:26:49.409708 systemd[1]: sshd@354-91.107.204.139:22-60.164.133.37:50282.service: Deactivated successfully.
May 10 00:26:49.611561 systemd[1]: Started sshd@356-91.107.204.139:22-60.164.133.37:51832.service - OpenSSH per-connection server daemon (60.164.133.37:51832).
May 10 00:26:50.304062 sshd[7857]: Accepted publickey for core from 147.75.109.163 port 43508 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:50.306442 sshd[7857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:50.313266 systemd-logind[1458]: New session 17 of user core.
May 10 00:26:50.319534 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:26:50.558105 sshd[7862]: Connection closed by authenticating user root 60.164.133.37 port 51832 [preauth]
May 10 00:26:50.562472 systemd[1]: sshd@356-91.107.204.139:22-60.164.133.37:51832.service: Deactivated successfully.
May 10 00:26:50.758702 systemd[1]: Started sshd@357-91.107.204.139:22-60.164.133.37:53478.service - OpenSSH per-connection server daemon (60.164.133.37:53478).
May 10 00:26:51.076929 sshd[7857]: pam_unix(sshd:session): session closed for user core
May 10 00:26:51.081568 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
May 10 00:26:51.082213 systemd[1]: sshd@355-91.107.204.139:22-147.75.109.163:43508.service: Deactivated successfully.
May 10 00:26:51.084539 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:26:51.085694 systemd-logind[1458]: Removed session 17.
May 10 00:26:51.699435 sshd[7868]: Connection closed by authenticating user root 60.164.133.37 port 53478 [preauth]
May 10 00:26:51.703515 systemd[1]: sshd@357-91.107.204.139:22-60.164.133.37:53478.service: Deactivated successfully.
May 10 00:26:51.907981 systemd[1]: Started sshd@358-91.107.204.139:22-60.164.133.37:54696.service - OpenSSH per-connection server daemon (60.164.133.37:54696).
May 10 00:26:52.865021 sshd[7881]: Connection closed by authenticating user root 60.164.133.37 port 54696 [preauth]
May 10 00:26:52.867891 systemd[1]: sshd@358-91.107.204.139:22-60.164.133.37:54696.service: Deactivated successfully.
May 10 00:26:53.074659 systemd[1]: Started sshd@359-91.107.204.139:22-60.164.133.37:56214.service - OpenSSH per-connection server daemon (60.164.133.37:56214).
May 10 00:26:54.029352 sshd[7889]: Connection closed by authenticating user root 60.164.133.37 port 56214 [preauth]
May 10 00:26:54.033841 systemd[1]: sshd@359-91.107.204.139:22-60.164.133.37:56214.service: Deactivated successfully.
May 10 00:26:54.238807 systemd[1]: Started sshd@360-91.107.204.139:22-60.164.133.37:57692.service - OpenSSH per-connection server daemon (60.164.133.37:57692).
May 10 00:26:55.182192 sshd[7894]: Connection closed by authenticating user root 60.164.133.37 port 57692 [preauth]
May 10 00:26:55.184066 systemd[1]: sshd@360-91.107.204.139:22-60.164.133.37:57692.service: Deactivated successfully.
May 10 00:26:55.388607 systemd[1]: Started sshd@361-91.107.204.139:22-60.164.133.37:59014.service - OpenSSH per-connection server daemon (60.164.133.37:59014).
May 10 00:26:56.254621 systemd[1]: Started sshd@362-91.107.204.139:22-147.75.109.163:43514.service - OpenSSH per-connection server daemon (147.75.109.163:43514).
May 10 00:26:56.332224 sshd[7899]: Connection closed by authenticating user root 60.164.133.37 port 59014 [preauth]
May 10 00:26:56.335809 systemd[1]: sshd@361-91.107.204.139:22-60.164.133.37:59014.service: Deactivated successfully.
May 10 00:26:56.533654 systemd[1]: Started sshd@363-91.107.204.139:22-60.164.133.37:60402.service - OpenSSH per-connection server daemon (60.164.133.37:60402).
May 10 00:26:57.249673 sshd[7902]: Accepted publickey for core from 147.75.109.163 port 43514 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:26:57.252563 sshd[7902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:26:57.259082 systemd-logind[1458]: New session 18 of user core.
May 10 00:26:57.264591 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:26:57.472390 sshd[7907]: Connection closed by authenticating user root 60.164.133.37 port 60402 [preauth]
May 10 00:26:57.475218 systemd[1]: sshd@363-91.107.204.139:22-60.164.133.37:60402.service: Deactivated successfully.
May 10 00:26:57.679575 systemd[1]: Started sshd@364-91.107.204.139:22-60.164.133.37:33568.service - OpenSSH per-connection server daemon (60.164.133.37:33568).
May 10 00:26:58.011013 sshd[7902]: pam_unix(sshd:session): session closed for user core
May 10 00:26:58.016466 systemd[1]: sshd@362-91.107.204.139:22-147.75.109.163:43514.service: Deactivated successfully.
May 10 00:26:58.019221 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:26:58.021977 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
May 10 00:26:58.023244 systemd-logind[1458]: Removed session 18.
May 10 00:26:58.636479 sshd[7932]: Connection closed by authenticating user root 60.164.133.37 port 33568 [preauth]
May 10 00:26:58.640699 systemd[1]: sshd@364-91.107.204.139:22-60.164.133.37:33568.service: Deactivated successfully.
May 10 00:26:58.948903 systemd[1]: Started sshd@365-91.107.204.139:22-60.164.133.37:35006.service - OpenSSH per-connection server daemon (60.164.133.37:35006).
May 10 00:27:00.160431 sshd[7947]: Connection closed by authenticating user root 60.164.133.37 port 35006 [preauth]
May 10 00:27:00.162671 systemd[1]: sshd@365-91.107.204.139:22-60.164.133.37:35006.service: Deactivated successfully.
May 10 00:27:00.320989 systemd[1]: Started sshd@366-91.107.204.139:22-60.164.133.37:36864.service - OpenSSH per-connection server daemon (60.164.133.37:36864).
May 10 00:27:01.279897 sshd[7952]: Connection closed by authenticating user root 60.164.133.37 port 36864 [preauth]
May 10 00:27:01.283917 systemd[1]: sshd@366-91.107.204.139:22-60.164.133.37:36864.service: Deactivated successfully.
May 10 00:27:01.479745 systemd[1]: Started sshd@367-91.107.204.139:22-60.164.133.37:38438.service - OpenSSH per-connection server daemon (60.164.133.37:38438).
May 10 00:27:02.422246 sshd[7969]: Connection closed by authenticating user root 60.164.133.37 port 38438 [preauth]
May 10 00:27:02.425471 systemd[1]: sshd@367-91.107.204.139:22-60.164.133.37:38438.service: Deactivated successfully.
May 10 00:27:02.632613 systemd[1]: Started sshd@368-91.107.204.139:22-60.164.133.37:39912.service - OpenSSH per-connection server daemon (60.164.133.37:39912).
May 10 00:27:03.187663 systemd[1]: Started sshd@369-91.107.204.139:22-147.75.109.163:54980.service - OpenSSH per-connection server daemon (147.75.109.163:54980).
May 10 00:27:03.592580 sshd[7974]: Connection closed by authenticating user root 60.164.133.37 port 39912 [preauth]
May 10 00:27:03.595682 systemd[1]: sshd@368-91.107.204.139:22-60.164.133.37:39912.service: Deactivated successfully.
May 10 00:27:03.785315 systemd[1]: Started sshd@370-91.107.204.139:22-60.164.133.37:41176.service - OpenSSH per-connection server daemon (60.164.133.37:41176).
May 10 00:27:04.184985 sshd[7977]: Accepted publickey for core from 147.75.109.163 port 54980 ssh2: RSA SHA256:f5WfDv+qi5DuYrx2bRROpkXs75JJRPKe8+tldd3Tjew
May 10 00:27:04.186688 sshd[7977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:27:04.191873 systemd-logind[1458]: New session 19 of user core.
May 10 00:27:04.198482 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:27:04.731366 sshd[8004]: Connection closed by authenticating user root 60.164.133.37 port 41176 [preauth]
May 10 00:27:04.732815 systemd[1]: sshd@370-91.107.204.139:22-60.164.133.37:41176.service: Deactivated successfully.
May 10 00:27:04.932615 systemd[1]: Started sshd@371-91.107.204.139:22-60.164.133.37:42554.service - OpenSSH per-connection server daemon (60.164.133.37:42554).
May 10 00:27:04.950464 sshd[7977]: pam_unix(sshd:session): session closed for user core
May 10 00:27:04.954498 systemd[1]: sshd@369-91.107.204.139:22-147.75.109.163:54980.service: Deactivated successfully.
May 10 00:27:04.959025 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:27:04.962081 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
May 10 00:27:04.963946 systemd-logind[1458]: Removed session 19.
May 10 00:27:05.264749 systemd[1]: run-containerd-runc-k8s.io-d8194b1f7c7419c5f1292efd6309c32f21b151a1e1dc88c7bd11ec7c8456f0dc-runc.MxGB0A.mount: Deactivated successfully.
May 10 00:27:05.871267 sshd[8022]: Connection closed by authenticating user root 60.164.133.37 port 42554 [preauth]
May 10 00:27:05.874596 systemd[1]: sshd@371-91.107.204.139:22-60.164.133.37:42554.service: Deactivated successfully.
May 10 00:27:06.079585 systemd[1]: Started sshd@372-91.107.204.139:22-60.164.133.37:44090.service - OpenSSH per-connection server daemon (60.164.133.37:44090).
May 10 00:27:07.031511 sshd[8047]: Connection closed by authenticating user root 60.164.133.37 port 44090 [preauth]
May 10 00:27:07.036043 systemd[1]: sshd@372-91.107.204.139:22-60.164.133.37:44090.service: Deactivated successfully.
May 10 00:27:07.233792 systemd[1]: Started sshd@373-91.107.204.139:22-60.164.133.37:45474.service - OpenSSH per-connection server daemon (60.164.133.37:45474).
May 10 00:27:08.179417 sshd[8053]: Connection closed by authenticating user root 60.164.133.37 port 45474 [preauth]
May 10 00:27:08.183074 systemd[1]: sshd@373-91.107.204.139:22-60.164.133.37:45474.service: Deactivated successfully.
May 10 00:27:08.389708 systemd[1]: Started sshd@374-91.107.204.139:22-60.164.133.37:46852.service - OpenSSH per-connection server daemon (60.164.133.37:46852).
May 10 00:27:09.358598 sshd[8058]: Connection closed by authenticating user root 60.164.133.37 port 46852 [preauth]
May 10 00:27:09.361935 systemd[1]: sshd@374-91.107.204.139:22-60.164.133.37:46852.service: Deactivated successfully.
May 10 00:27:09.562673 systemd[1]: Started sshd@375-91.107.204.139:22-60.164.133.37:48358.service - OpenSSH per-connection server daemon (60.164.133.37:48358).
May 10 00:27:10.506347 sshd[8063]: Connection closed by authenticating user root 60.164.133.37 port 48358 [preauth]
May 10 00:27:10.510513 systemd[1]: sshd@375-91.107.204.139:22-60.164.133.37:48358.service: Deactivated successfully.
May 10 00:27:10.715755 systemd[1]: Started sshd@376-91.107.204.139:22-60.164.133.37:49838.service - OpenSSH per-connection server daemon (60.164.133.37:49838).
May 10 00:27:11.661827 sshd[8068]: Connection closed by authenticating user root 60.164.133.37 port 49838 [preauth]
May 10 00:27:11.665443 systemd[1]: sshd@376-91.107.204.139:22-60.164.133.37:49838.service: Deactivated successfully.
May 10 00:27:11.865566 systemd[1]: Started sshd@377-91.107.204.139:22-60.164.133.37:51266.service - OpenSSH per-connection server daemon (60.164.133.37:51266).
May 10 00:27:12.816461 sshd[8073]: Connection closed by authenticating user root 60.164.133.37 port 51266 [preauth]
May 10 00:27:12.819692 systemd[1]: sshd@377-91.107.204.139:22-60.164.133.37:51266.service: Deactivated successfully.
May 10 00:27:13.018620 systemd[1]: Started sshd@378-91.107.204.139:22-60.164.133.37:52746.service - OpenSSH per-connection server daemon (60.164.133.37:52746).
May 10 00:27:13.981014 sshd[8078]: Connection closed by authenticating user root 60.164.133.37 port 52746 [preauth]
May 10 00:27:13.983764 systemd[1]: sshd@378-91.107.204.139:22-60.164.133.37:52746.service: Deactivated successfully.
May 10 00:27:14.183743 systemd[1]: Started sshd@379-91.107.204.139:22-60.164.133.37:54070.service - OpenSSH per-connection server daemon (60.164.133.37:54070).
May 10 00:27:15.136556 sshd[8083]: Connection closed by authenticating user root 60.164.133.37 port 54070 [preauth]
May 10 00:27:15.140848 systemd[1]: sshd@379-91.107.204.139:22-60.164.133.37:54070.service: Deactivated successfully.
May 10 00:27:15.346753 systemd[1]: Started sshd@380-91.107.204.139:22-60.164.133.37:55392.service - OpenSSH per-connection server daemon (60.164.133.37:55392).
May 10 00:27:16.301096 sshd[8088]: Connection closed by authenticating user root 60.164.133.37 port 55392 [preauth]
May 10 00:27:16.305041 systemd[1]: sshd@380-91.107.204.139:22-60.164.133.37:55392.service: Deactivated successfully.
May 10 00:27:16.503699 systemd[1]: Started sshd@381-91.107.204.139:22-60.164.133.37:56736.service - OpenSSH per-connection server daemon (60.164.133.37:56736).
May 10 00:27:17.451828 sshd[8093]: Connection closed by authenticating user root 60.164.133.37 port 56736 [preauth]
May 10 00:27:17.454399 systemd[1]: sshd@381-91.107.204.139:22-60.164.133.37:56736.service: Deactivated successfully.
May 10 00:27:17.655808 systemd[1]: Started sshd@382-91.107.204.139:22-60.164.133.37:58306.service - OpenSSH per-connection server daemon (60.164.133.37:58306).
May 10 00:27:18.596330 sshd[8098]: Connection closed by authenticating user root 60.164.133.37 port 58306 [preauth]
May 10 00:27:18.600704 systemd[1]: sshd@382-91.107.204.139:22-60.164.133.37:58306.service: Deactivated successfully.
May 10 00:27:18.802814 systemd[1]: Started sshd@383-91.107.204.139:22-60.164.133.37:59682.service - OpenSSH per-connection server daemon (60.164.133.37:59682).
May 10 00:27:19.745250 sshd[8105]: Connection closed by authenticating user root 60.164.133.37 port 59682 [preauth]
May 10 00:27:19.749163 systemd[1]: sshd@383-91.107.204.139:22-60.164.133.37:59682.service: Deactivated successfully.
May 10 00:27:19.952809 systemd[1]: Started sshd@384-91.107.204.139:22-60.164.133.37:60976.service - OpenSSH per-connection server daemon (60.164.133.37:60976).
May 10 00:27:20.243523 systemd[1]: cri-containerd-b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a.scope: Deactivated successfully.
May 10 00:27:20.243897 systemd[1]: cri-containerd-b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a.scope: Consumed 4.891s CPU time, 22.3M memory peak, 0B memory swap peak.
May 10 00:27:20.273340 containerd[1476]: time="2025-05-10T00:27:20.271577549Z" level=info msg="shim disconnected" id=b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a namespace=k8s.io
May 10 00:27:20.273340 containerd[1476]: time="2025-05-10T00:27:20.271698429Z" level=warning msg="cleaning up after shim disconnected" id=b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a namespace=k8s.io
May 10 00:27:20.274535 containerd[1476]: time="2025-05-10T00:27:20.271719229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:27:20.274739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a-rootfs.mount: Deactivated successfully.
May 10 00:27:20.364677 kubelet[3104]: I0510 00:27:20.364607 3104 scope.go:117] "RemoveContainer" containerID="b3ac6efd8cdcc5ed90c78ead86e3fd837b587425fabb13cb715fac84ae56158a"
May 10 00:27:20.368465 containerd[1476]: time="2025-05-10T00:27:20.368413922Z" level=info msg="CreateContainer within sandbox \"4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:27:20.384402 containerd[1476]: time="2025-05-10T00:27:20.384280650Z" level=info msg="CreateContainer within sandbox \"4827ba46c955f9a72c216a91f91802fe553c7a915a062ccc22065d7139140410\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3865f92a6b082112d3db9d3d4a911a4081555a3177292983787cf45b81d9d2a8\""
May 10 00:27:20.385338 containerd[1476]: time="2025-05-10T00:27:20.385275733Z" level=info msg="StartContainer for \"3865f92a6b082112d3db9d3d4a911a4081555a3177292983787cf45b81d9d2a8\""
May 10 00:27:20.418767 systemd[1]: Started cri-containerd-3865f92a6b082112d3db9d3d4a911a4081555a3177292983787cf45b81d9d2a8.scope - libcontainer container 3865f92a6b082112d3db9d3d4a911a4081555a3177292983787cf45b81d9d2a8.
May 10 00:27:20.460016 containerd[1476]: time="2025-05-10T00:27:20.459968159Z" level=info msg="StartContainer for \"3865f92a6b082112d3db9d3d4a911a4081555a3177292983787cf45b81d9d2a8\" returns successfully"
May 10 00:27:20.479529 kubelet[3104]: E0510 00:27:20.479172 3104 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45800->10.0.0.2:2379: read: connection timed out"
May 10 00:27:20.487702 systemd[1]: cri-containerd-4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47.scope: Deactivated successfully.
May 10 00:27:20.488082 systemd[1]: cri-containerd-4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47.scope: Consumed 3.885s CPU time, 15.2M memory peak, 0B memory swap peak.
May 10 00:27:20.518377 containerd[1476]: time="2025-05-10T00:27:20.518218855Z" level=info msg="shim disconnected" id=4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47 namespace=k8s.io
May 10 00:27:20.518377 containerd[1476]: time="2025-05-10T00:27:20.518281216Z" level=warning msg="cleaning up after shim disconnected" id=4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47 namespace=k8s.io
May 10 00:27:20.518377 containerd[1476]: time="2025-05-10T00:27:20.518317336Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:27:20.539726 systemd[1]: cri-containerd-38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8.scope: Deactivated successfully.
May 10 00:27:20.540036 systemd[1]: cri-containerd-38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8.scope: Consumed 5.592s CPU time.
May 10 00:27:20.568274 containerd[1476]: time="2025-05-10T00:27:20.568061126Z" level=info msg="shim disconnected" id=38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8 namespace=k8s.io
May 10 00:27:20.568274 containerd[1476]: time="2025-05-10T00:27:20.568269847Z" level=warning msg="cleaning up after shim disconnected" id=38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8 namespace=k8s.io
May 10 00:27:20.570609 containerd[1476]: time="2025-05-10T00:27:20.568284207Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:27:20.905355 sshd[8110]: Connection closed by authenticating user root 60.164.133.37 port 60976 [preauth]
May 10 00:27:20.909632 systemd[1]: sshd@384-91.107.204.139:22-60.164.133.37:60976.service: Deactivated successfully.
May 10 00:27:21.107212 systemd[1]: Started sshd@385-91.107.204.139:22-60.164.133.37:34288.service - OpenSSH per-connection server daemon (60.164.133.37:34288).
May 10 00:27:21.274441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8-rootfs.mount: Deactivated successfully.
May 10 00:27:21.274629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47-rootfs.mount: Deactivated successfully.
May 10 00:27:21.370777 kubelet[3104]: I0510 00:27:21.370418 3104 scope.go:117] "RemoveContainer" containerID="38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8"
May 10 00:27:21.373512 kubelet[3104]: I0510 00:27:21.372862 3104 scope.go:117] "RemoveContainer" containerID="4c7dee102ac82ff7e321a531b88ded62a93dd6f95481c6b986ffa1679a92af47"
May 10 00:27:21.376503 containerd[1476]: time="2025-05-10T00:27:21.376109491Z" level=info msg="CreateContainer within sandbox \"9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 10 00:27:21.379400 containerd[1476]: time="2025-05-10T00:27:21.378757379Z" level=info msg="CreateContainer within sandbox \"a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:27:21.395646 containerd[1476]: time="2025-05-10T00:27:21.395194468Z" level=info msg="CreateContainer within sandbox \"9c3db3c4be8014d45efec8ba0c065dc0f669dd6c716f6c2895576b4bc5f5f47f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69\""
May 10 00:27:21.397347 containerd[1476]: time="2025-05-10T00:27:21.395978151Z" level=info msg="StartContainer for \"67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69\""
May 10 00:27:21.403011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577271241.mount: Deactivated successfully.
May 10 00:27:21.418619 containerd[1476]: time="2025-05-10T00:27:21.418192538Z" level=info msg="CreateContainer within sandbox \"a9b5f74382a81a5f64f62a411a5086a6ccfa64d43e4a49322bc3b6c82917e7f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"eb396a6b1f44513c311fe67958f4bf2a75919954634f5ad57811ea45bbf7bca8\""
May 10 00:27:21.418831 containerd[1476]: time="2025-05-10T00:27:21.418791300Z" level=info msg="StartContainer for \"eb396a6b1f44513c311fe67958f4bf2a75919954634f5ad57811ea45bbf7bca8\""
May 10 00:27:21.467170 systemd[1]: Started cri-containerd-67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69.scope - libcontainer container 67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69.
May 10 00:27:21.468436 systemd[1]: Started cri-containerd-eb396a6b1f44513c311fe67958f4bf2a75919954634f5ad57811ea45bbf7bca8.scope - libcontainer container eb396a6b1f44513c311fe67958f4bf2a75919954634f5ad57811ea45bbf7bca8.
May 10 00:27:21.535013 containerd[1476]: time="2025-05-10T00:27:21.533273446Z" level=info msg="StartContainer for \"eb396a6b1f44513c311fe67958f4bf2a75919954634f5ad57811ea45bbf7bca8\" returns successfully"
May 10 00:27:21.543925 containerd[1476]: time="2025-05-10T00:27:21.543476437Z" level=info msg="StartContainer for \"67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69\" returns successfully"
May 10 00:27:22.063563 sshd[8226]: Connection closed by authenticating user root 60.164.133.37 port 34288 [preauth]
May 10 00:27:22.067975 systemd[1]: sshd@385-91.107.204.139:22-60.164.133.37:34288.service: Deactivated successfully.
May 10 00:27:22.267823 systemd[1]: Started sshd@386-91.107.204.139:22-60.164.133.37:35806.service - OpenSSH per-connection server daemon (60.164.133.37:35806).
May 10 00:27:22.969516 systemd[1]: cri-containerd-67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69.scope: Deactivated successfully.
May 10 00:27:23.003055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69-rootfs.mount: Deactivated successfully.
May 10 00:27:23.007123 containerd[1476]: time="2025-05-10T00:27:23.006855940Z" level=info msg="shim disconnected" id=67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69 namespace=k8s.io
May 10 00:27:23.007123 containerd[1476]: time="2025-05-10T00:27:23.006909420Z" level=warning msg="cleaning up after shim disconnected" id=67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69 namespace=k8s.io
May 10 00:27:23.007123 containerd[1476]: time="2025-05-10T00:27:23.006918060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:27:23.224241 sshd[8307]: Connection closed by authenticating user root 60.164.133.37 port 35806 [preauth]
May 10 00:27:23.226661 systemd[1]: sshd@386-91.107.204.139:22-60.164.133.37:35806.service: Deactivated successfully.
May 10 00:27:23.388603 kubelet[3104]: I0510 00:27:23.387702 3104 scope.go:117] "RemoveContainer" containerID="38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8"
May 10 00:27:23.388603 kubelet[3104]: I0510 00:27:23.388012 3104 scope.go:117] "RemoveContainer" containerID="67b227df0a23f7a85484b9cf864ee332ffb4218967e14f071552265cd4dd8f69"
May 10 00:27:23.388603 kubelet[3104]: E0510 00:27:23.388326 3104 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-797db67f8-kfvq5_tigera-operator(09aa72e2-eb45-406d-83de-db926e8bf680)\"" pod="tigera-operator/tigera-operator-797db67f8-kfvq5" podUID="09aa72e2-eb45-406d-83de-db926e8bf680"
May 10 00:27:23.390853 containerd[1476]: time="2025-05-10T00:27:23.390541058Z" level=info msg="RemoveContainer for \"38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8\""
May 10 00:27:23.395143 containerd[1476]: time="2025-05-10T00:27:23.395039272Z" level=info msg="RemoveContainer for \"38dc4a320bcd557880619b1d29227b498a62aff1710f84690d1c455df200e5f8\" returns successfully"
May 10 00:27:23.434407 systemd[1]: Started sshd@387-91.107.204.139:22-60.164.133.37:37248.service - OpenSSH per-connection server daemon (60.164.133.37:37248).
May 10 00:27:24.403105 sshd[8342]: Connection closed by authenticating user root 60.164.133.37 port 37248 [preauth]
May 10 00:27:24.407404 systemd[1]: sshd@387-91.107.204.139:22-60.164.133.37:37248.service: Deactivated successfully.
May 10 00:27:24.602750 systemd[1]: Started sshd@388-91.107.204.139:22-60.164.133.37:38714.service - OpenSSH per-connection server daemon (60.164.133.37:38714).
May 10 00:27:25.542102 sshd[8347]: Connection closed by authenticating user root 60.164.133.37 port 38714 [preauth]
May 10 00:27:25.545838 systemd[1]: sshd@388-91.107.204.139:22-60.164.133.37:38714.service: Deactivated successfully.
May 10 00:27:25.597450 kubelet[3104]: E0510 00:27:25.596927 3104 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45616->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-n-2389c948d4.183e02d8b0925aa3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-n-2389c948d4,UID:371e9d14f51cb6001b93f65b4772119b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-2389c948d4,},FirstTimestamp:2025-05-10 00:27:15.164904099 +0000 UTC m=+342.084103734,LastTimestamp:2025-05-10 00:27:15.164904099 +0000 UTC m=+342.084103734,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-2389c948d4,}"
May 10 00:27:25.750889 systemd[1]: Started sshd@389-91.107.204.139:22-60.164.133.37:40204.service - OpenSSH per-connection server daemon (60.164.133.37:40204).