Jan 30 05:25:42.176140 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 05:25:42.176172 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 05:25:42.176183 kernel: BIOS-provided physical RAM map: Jan 30 05:25:42.176191 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 05:25:42.176198 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 05:25:42.176206 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 05:25:42.176213 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jan 30 05:25:42.176221 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jan 30 05:25:42.176230 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 05:25:42.176236 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 05:25:42.176243 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 05:25:42.176249 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 05:25:42.176255 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 05:25:42.176262 kernel: NX (Execute Disable) protection: active Jan 30 05:25:42.176273 kernel: APIC: Static calls initialized Jan 30 05:25:42.176280 kernel: SMBIOS 3.0.0 present. 
Jan 30 05:25:42.176289 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jan 30 05:25:42.176296 kernel: Hypervisor detected: KVM Jan 30 05:25:42.176303 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 05:25:42.176310 kernel: kvm-clock: using sched offset of 3346065008 cycles Jan 30 05:25:42.176317 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 05:25:42.176324 kernel: tsc: Detected 2495.264 MHz processor Jan 30 05:25:42.176332 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 05:25:42.176342 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 05:25:42.176349 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jan 30 05:25:42.176356 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 05:25:42.176363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 05:25:42.176370 kernel: Using GB pages for direct mapping Jan 30 05:25:42.176377 kernel: ACPI: Early table checksum verification disabled Jan 30 05:25:42.176384 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Jan 30 05:25:42.176393 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176590 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176600 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176607 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jan 30 05:25:42.176614 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176621 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176628 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176635 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 05:25:42.176642 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Jan 30 05:25:42.176649 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Jan 30 05:25:42.176662 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jan 30 05:25:42.176671 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Jan 30 05:25:42.176679 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Jan 30 05:25:42.176687 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Jan 30 05:25:42.176694 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Jan 30 05:25:42.176701 kernel: No NUMA configuration found Jan 30 05:25:42.176711 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jan 30 05:25:42.176718 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jan 30 05:25:42.176725 kernel: Zone ranges: Jan 30 05:25:42.176732 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 05:25:42.176739 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jan 30 05:25:42.176746 kernel: Normal empty Jan 30 05:25:42.176753 kernel: Movable zone start for each node Jan 30 05:25:42.176760 kernel: Early memory node ranges Jan 30 05:25:42.176767 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 05:25:42.176774 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jan 30 05:25:42.176784 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jan 30 05:25:42.176790 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 05:25:42.176797 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 05:25:42.176804 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 05:25:42.176812 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 05:25:42.176821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 05:25:42.176829 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 05:25:42.176836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 05:25:42.176843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 05:25:42.176852 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 05:25:42.176860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 05:25:42.176867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 05:25:42.176874 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 05:25:42.176881 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 05:25:42.176908 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 05:25:42.176915 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 05:25:42.176922 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 05:25:42.176929 kernel: Booting paravirtualized kernel on KVM Jan 30 05:25:42.176940 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 05:25:42.176947 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 05:25:42.176954 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 05:25:42.176961 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 05:25:42.176968 kernel: pcpu-alloc: [0] 0 1 Jan 30 05:25:42.176975 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 05:25:42.176992 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 05:25:42.177001 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 05:25:42.177010 kernel: random: crng init done Jan 30 05:25:42.177017 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 05:25:42.177024 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 05:25:42.177032 kernel: Fallback order for Node 0: 0 Jan 30 05:25:42.177039 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jan 30 05:25:42.177074 kernel: Policy zone: DMA32 Jan 30 05:25:42.177082 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 05:25:42.177089 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Jan 30 05:25:42.177096 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 05:25:42.177107 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 05:25:42.177114 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 05:25:42.177121 kernel: Dynamic Preempt: voluntary Jan 30 05:25:42.177128 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 05:25:42.177136 kernel: rcu: RCU event tracing is enabled. Jan 30 05:25:42.177144 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 05:25:42.177151 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 05:25:42.177158 kernel: Rude variant of Tasks RCU enabled. Jan 30 05:25:42.177165 kernel: Tracing variant of Tasks RCU enabled. Jan 30 05:25:42.177172 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 05:25:42.177182 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 05:25:42.177189 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 05:25:42.177196 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 05:25:42.177202 kernel: Console: colour VGA+ 80x25 Jan 30 05:25:42.177210 kernel: printk: console [tty0] enabled Jan 30 05:25:42.177217 kernel: printk: console [ttyS0] enabled Jan 30 05:25:42.177224 kernel: ACPI: Core revision 20230628 Jan 30 05:25:42.177231 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 05:25:42.177238 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 05:25:42.177248 kernel: x2apic enabled Jan 30 05:25:42.177255 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 05:25:42.177262 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 05:25:42.177269 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 05:25:42.177276 kernel: Calibrating delay loop (skipped) preset value.. 4990.52 BogoMIPS (lpj=2495264) Jan 30 05:25:42.177283 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 05:25:42.177291 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 05:25:42.177298 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 05:25:42.177315 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 05:25:42.177323 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 05:25:42.177330 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 05:25:42.177342 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 05:25:42.177350 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 05:25:42.177358 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 05:25:42.177365 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 05:25:42.177373 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 05:25:42.177380 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Jan 30 05:25:42.177391 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 05:25:42.177398 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 05:25:42.177406 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 05:25:42.177414 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 05:25:42.177421 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 05:25:42.177429 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 05:25:42.177436 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 05:25:42.177446 kernel: Freeing SMP alternatives memory: 32K Jan 30 05:25:42.177454 kernel: pid_max: default: 32768 minimum: 301 Jan 30 05:25:42.177461 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 05:25:42.177469 kernel: landlock: Up and running. Jan 30 05:25:42.177476 kernel: SELinux: Initializing. Jan 30 05:25:42.177484 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 05:25:42.177491 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 05:25:42.177499 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 05:25:42.177507 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 05:25:42.177517 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 05:25:42.177524 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 05:25:42.177532 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 05:25:42.177539 kernel: ... version: 0 Jan 30 05:25:42.177546 kernel: ... bit width: 48 Jan 30 05:25:42.177554 kernel: ... generic registers: 6 Jan 30 05:25:42.177561 kernel: ... value mask: 0000ffffffffffff Jan 30 05:25:42.177569 kernel: ... max period: 00007fffffffffff Jan 30 05:25:42.177576 kernel: ... fixed-purpose events: 0 Jan 30 05:25:42.177586 kernel: ... event mask: 000000000000003f Jan 30 05:25:42.177593 kernel: signal: max sigframe size: 1776 Jan 30 05:25:42.177601 kernel: rcu: Hierarchical SRCU implementation. Jan 30 05:25:42.177608 kernel: rcu: Max phase no-delay instances is 400. Jan 30 05:25:42.177616 kernel: smp: Bringing up secondary CPUs ... Jan 30 05:25:42.177623 kernel: smpboot: x86: Booting SMP configuration: Jan 30 05:25:42.177630 kernel: .... 
node #0, CPUs: #1 Jan 30 05:25:42.177637 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 05:25:42.177645 kernel: smpboot: Max logical packages: 1 Jan 30 05:25:42.177652 kernel: smpboot: Total of 2 processors activated (9981.05 BogoMIPS) Jan 30 05:25:42.177662 kernel: devtmpfs: initialized Jan 30 05:25:42.177669 kernel: x86/mm: Memory block size: 128MB Jan 30 05:25:42.177677 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 05:25:42.177684 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 05:25:42.177692 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 05:25:42.177699 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 05:25:42.177707 kernel: audit: initializing netlink subsys (disabled) Jan 30 05:25:42.177715 kernel: audit: type=2000 audit(1738214740.370:1): state=initialized audit_enabled=0 res=1 Jan 30 05:25:42.177722 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 05:25:42.177732 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 05:25:42.177740 kernel: cpuidle: using governor menu Jan 30 05:25:42.177747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 05:25:42.177754 kernel: dca service started, version 1.12.1 Jan 30 05:25:42.177762 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 05:25:42.177769 kernel: PCI: Using configuration type 1 for base access Jan 30 05:25:42.177777 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 05:25:42.177784 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 05:25:42.177792 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 05:25:42.177802 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 05:25:42.177809 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 05:25:42.177816 kernel: ACPI: Added _OSI(Module Device) Jan 30 05:25:42.177824 kernel: ACPI: Added _OSI(Processor Device) Jan 30 05:25:42.177832 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 05:25:42.177839 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 05:25:42.177847 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 05:25:42.177854 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 05:25:42.177862 kernel: ACPI: Interpreter enabled Jan 30 05:25:42.177872 kernel: ACPI: PM: (supports S0 S5) Jan 30 05:25:42.177879 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 05:25:42.177898 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 05:25:42.177912 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 05:25:42.177925 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 05:25:42.177935 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 05:25:42.178184 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 05:25:42.178347 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 05:25:42.178499 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 05:25:42.178513 kernel: PCI host bridge to bus 0000:00 Jan 30 05:25:42.178679 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 05:25:42.178819 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 
05:25:42.178975 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 05:25:42.179139 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jan 30 05:25:42.179308 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 05:25:42.179452 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 05:25:42.179590 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 05:25:42.179756 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 05:25:42.179911 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jan 30 05:25:42.180055 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jan 30 05:25:42.180177 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jan 30 05:25:42.180302 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jan 30 05:25:42.180423 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jan 30 05:25:42.180542 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 05:25:42.180675 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.180794 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jan 30 05:25:42.180966 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.181103 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jan 30 05:25:42.181232 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.181357 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jan 30 05:25:42.181489 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.181610 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jan 30 05:25:42.181742 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.181867 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jan 30 05:25:42.182029 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.182152 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jan 30 05:25:42.182287 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.182438 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jan 30 05:25:42.182578 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.182701 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jan 30 05:25:42.182873 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 30 05:25:42.183041 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jan 30 05:25:42.183200 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 05:25:42.183320 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 05:25:42.183456 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 05:25:42.183576 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jan 30 05:25:42.183729 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jan 30 05:25:42.183885 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 05:25:42.184075 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 05:25:42.184216 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 05:25:42.184343 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jan 30 05:25:42.184468 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 30 
05:25:42.184653 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Jan 30 05:25:42.184904 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 05:25:42.185042 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 05:25:42.185166 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 05:25:42.185304 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 30 05:25:42.185483 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jan 30 05:25:42.185631 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 05:25:42.185757 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 05:25:42.185876 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 05:25:42.186059 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 30 05:25:42.186191 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jan 30 05:25:42.186319 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jan 30 05:25:42.186442 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 05:25:42.186562 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 05:25:42.186688 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 05:25:42.186828 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 30 05:25:42.187075 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jan 30 05:25:42.187207 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 05:25:42.187330 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 05:25:42.187448 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 05:25:42.187582 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 30 05:25:42.187713 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jan 30 05:25:42.187836 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jan 30 05:25:42.187971 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 05:25:42.188109 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 05:25:42.188228 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 05:25:42.188362 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 30 05:25:42.188487 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jan 30 05:25:42.188630 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jan 30 05:25:42.188756 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 05:25:42.188873 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 05:25:42.189084 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 05:25:42.189095 kernel: acpiphp: Slot [0] registered Jan 30 05:25:42.189236 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 30 05:25:42.189359 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jan 30 05:25:42.189482 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jan 30 05:25:42.189610 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jan 30 05:25:42.189728 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 05:25:42.189844 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 05:25:42.189975 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 05:25:42.189994 
kernel: acpiphp: Slot [0-2] registered Jan 30 05:25:42.190112 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 05:25:42.190229 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 05:25:42.190347 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 05:25:42.190361 kernel: acpiphp: Slot [0-3] registered Jan 30 05:25:42.190477 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 05:25:42.190594 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 05:25:42.190727 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 05:25:42.190739 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 05:25:42.190746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 05:25:42.190754 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 05:25:42.190762 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 05:25:42.190770 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 05:25:42.190781 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 05:25:42.190788 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 05:25:42.190796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 05:25:42.190803 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 05:25:42.190811 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 05:25:42.190818 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 05:25:42.190825 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 05:25:42.190833 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 05:25:42.190840 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 05:25:42.190850 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 05:25:42.190858 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 05:25:42.190865 kernel: iommu: Default domain type: Translated Jan 30 05:25:42.190873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 05:25:42.190880 kernel: PCI: Using ACPI for IRQ routing Jan 30 05:25:42.190902 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 05:25:42.190909 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 05:25:42.190917 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jan 30 05:25:42.191049 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 05:25:42.191172 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 05:25:42.191289 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 05:25:42.191299 kernel: vgaarb: loaded Jan 30 05:25:42.191306 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 05:25:42.191314 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 05:25:42.191322 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 05:25:42.191329 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 05:25:42.191337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 05:25:42.191345 kernel: pnp: PnP ACPI init Jan 30 05:25:42.191478 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 05:25:42.191488 kernel: pnp: PnP ACPI: found 5 devices Jan 30 05:25:42.191496 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 05:25:42.191504 kernel: NET: Registered PF_INET protocol family Jan 30 
05:25:42.191512 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 05:25:42.191519 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 05:25:42.191527 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 05:25:42.191535 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 05:25:42.191545 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 05:25:42.191553 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 05:25:42.191561 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 05:25:42.191569 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 05:25:42.191576 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 05:25:42.191583 kernel: NET: Registered PF_XDP protocol family Jan 30 05:25:42.191702 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 05:25:42.191820 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 05:25:42.191957 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 05:25:42.192088 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jan 30 05:25:42.192218 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jan 30 05:25:42.192337 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jan 30 05:25:42.192454 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 30 05:25:42.192571 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jan 30 05:25:42.192688 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 05:25:42.192809 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 30 05:25:42.193016 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jan 30 05:25:42.193139 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 05:25:42.193269 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 30 05:25:42.193389 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jan 30 05:25:42.193508 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 05:25:42.193631 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 30 05:25:42.193754 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jan 30 05:25:42.193903 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 05:25:42.194041 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 30 05:25:42.194171 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jan 30 05:25:42.194298 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 05:25:42.194417 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 30 05:25:42.194534 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jan 30 05:25:42.194651 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 05:25:42.194767 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 30 05:25:42.194885 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jan 30 05:25:42.195144 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jan 30 05:25:42.195263 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 05:25:42.195380 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 30 05:25:42.195495 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jan 30 05:25:42.195611 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jan 30 05:25:42.195727 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 05:25:42.195849 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 30 05:25:42.195998 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jan 30 05:25:42.196119 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 30 05:25:42.196250 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 05:25:42.196413 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 05:25:42.196529 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 05:25:42.196637 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 05:25:42.196790 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jan 30 05:25:42.196974 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 05:25:42.197094 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 05:25:42.197219 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 30 05:25:42.197360 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jan 30 05:25:42.197489 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 30 05:25:42.197602 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 30 05:25:42.197726 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 30 05:25:42.197840 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 30 05:25:42.198019 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 30 05:25:42.198155 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 30 05:25:42.198284 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 30 05:25:42.198397 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 30 05:25:42.198549 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jan 30 05:25:42.198663 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 30 05:25:42.198808 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jan 30 05:25:42.198945 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 30 05:25:42.199097 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 30 05:25:42.199223 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jan 30 05:25:42.199336 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jan 30 05:25:42.199472 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 30 05:25:42.199619 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jan 30 05:25:42.199734 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 30 05:25:42.199847 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 30 05:25:42.199862 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 05:25:42.199870 kernel: PCI: CLS 0 bytes, default 64 Jan 30 05:25:42.199878 kernel: Initialise system trusted keyrings Jan 30 05:25:42.199902 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 05:25:42.199910 kernel: Key type asymmetric registered Jan 30 05:25:42.199918 kernel: Asymmetric key parser 'x509' registered Jan 30 05:25:42.199926 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) Jan 30 05:25:42.199934 kernel: io scheduler mq-deadline registered Jan 30 05:25:42.199942 kernel: io scheduler kyber registered Jan 30 05:25:42.199982 kernel: io scheduler bfq registered Jan 30 05:25:42.200124 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 05:25:42.200245 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 05:25:42.200377 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 05:25:42.200509 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 05:25:42.200674 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 05:25:42.200828 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 05:25:42.200973 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 05:25:42.201107 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 05:25:42.201231 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 05:25:42.201372 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 05:25:42.201544 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 05:25:42.201701 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 05:25:42.201855 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 05:25:42.202038 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 05:25:42.202165 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 05:25:42.202307 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 05:25:42.202331 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 05:25:42.202460 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jan 30 05:25:42.202579 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jan 30 05:25:42.202630 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 05:25:42.202672 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jan 30 05:25:42.202682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 05:25:42.202692 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 05:25:42.202702 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 05:25:42.202712 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 05:25:42.202727 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 05:25:42.202738 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 05:25:42.203061 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 05:25:42.203180 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 05:25:42.203307 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:25:41 UTC (1738214741) Jan 30 05:25:42.203449 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 05:25:42.203461 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 05:25:42.203476 kernel: NET: Registered PF_INET6 protocol family Jan 30 05:25:42.203484 kernel: Segment Routing with IPv6 Jan 30 05:25:42.203492 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 05:25:42.203500 kernel: NET: Registered PF_PACKET protocol family Jan 30 05:25:42.203508 kernel: Key type dns_resolver registered Jan 30 05:25:42.203516 kernel: IPI shorthand broadcast: enabled Jan 30 05:25:42.203524 kernel: sched_clock: Marking stable (1478018061, 151166880)->(1643317102, -14132161) Jan 30 05:25:42.203532 kernel: registered taskstats version 1 Jan 30 05:25:42.203540 kernel: Loading compiled-in X.509 certificates Jan 30 05:25:42.203548 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 05:25:42.203559 kernel: Key type .fscrypt registered Jan 30 05:25:42.203567 kernel: Key type fscrypt-provisioning registered Jan 30 05:25:42.203575 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 05:25:42.203583 kernel: ima: Allocated hash algorithm: sha1 Jan 30 05:25:42.203591 kernel: ima: No architecture policies found Jan 30 05:25:42.203599 kernel: clk: Disabling unused clocks Jan 30 05:25:42.203607 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 05:25:42.203615 kernel: Write protecting the kernel read-only data: 36864k Jan 30 05:25:42.203626 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 05:25:42.203634 kernel: Run /init as init process Jan 30 05:25:42.203642 kernel: with arguments: Jan 30 05:25:42.203650 kernel: /init Jan 30 05:25:42.203658 kernel: with environment: Jan 30 05:25:42.203665 kernel: HOME=/ Jan 30 05:25:42.203673 kernel: TERM=linux Jan 30 05:25:42.203681 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 05:25:42.203691 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:25:42.203705 systemd[1]: Detected virtualization kvm. Jan 30 05:25:42.203713 systemd[1]: Detected architecture x86-64. Jan 30 05:25:42.203722 systemd[1]: Running in initrd. Jan 30 05:25:42.203730 systemd[1]: No hostname configured, using default hostname. Jan 30 05:25:42.203738 systemd[1]: Hostname set to . Jan 30 05:25:42.203746 systemd[1]: Initializing machine ID from VM UUID. Jan 30 05:25:42.203754 systemd[1]: Queued start job for default target initrd.target. Jan 30 05:25:42.203777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:25:42.203786 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:25:42.203795 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 05:25:42.203803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:25:42.203811 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 05:25:42.203820 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 05:25:42.203838 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 05:25:42.203849 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 05:25:42.203858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:25:42.203866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:25:42.203875 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:25:42.203883 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:25:42.203904 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:25:42.203912 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:25:42.203921 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 05:25:42.203929 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:25:42.203941 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 05:25:42.203952 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 05:25:42.203963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:25:42.203974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:25:42.204006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:25:42.204017 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:25:42.204028 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 05:25:42.204039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:25:42.204055 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 05:25:42.204064 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 05:25:42.204072 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:25:42.204081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:25:42.204091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:25:42.204102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 05:25:42.204113 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:25:42.204124 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 05:25:42.204164 systemd-journald[187]: Collecting audit messages is disabled. Jan 30 05:25:42.204190 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 05:25:42.204200 systemd-journald[187]: Journal started Jan 30 05:25:42.204219 systemd-journald[187]: Runtime Journal (/run/log/journal/e7131cd6b3f443758d14365ea9164141) is 4.8M, max 38.4M, 33.6M free. Jan 30 05:25:42.157649 systemd-modules-load[189]: Inserted module 'overlay' Jan 30 05:25:42.233820 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 05:25:42.233863 kernel: Bridge firewalling registered Jan 30 05:25:42.233874 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 05:25:42.208474 systemd-modules-load[189]: Inserted module 'br_netfilter' Jan 30 05:25:42.244423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:25:42.245343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:42.255197 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:25:42.269156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:25:42.271729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:25:42.276940 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 05:25:42.288136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:25:42.289946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:25:42.294654 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 05:25:42.306169 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 05:25:42.307097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:25:42.310070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:25:42.318556 dracut-cmdline[220]: dracut-dracut-053 Jan 30 05:25:42.324320 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 05:25:42.326445 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:25:42.359206 systemd-resolved[227]: Positive Trust Anchors: Jan 30 05:25:42.360167 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:25:42.360203 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:25:42.366380 systemd-resolved[227]: Defaulting to hostname 'linux'. Jan 30 05:25:42.368046 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:25:42.369071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:25:42.410973 kernel: SCSI subsystem initialized Jan 30 05:25:42.421926 kernel: Loading iSCSI transport class v2.0-870. Jan 30 05:25:42.433951 kernel: iscsi: registered transport (tcp) Jan 30 05:25:42.457086 kernel: iscsi: registered transport (qla4xxx) Jan 30 05:25:42.457188 kernel: QLogic iSCSI HBA Driver Jan 30 05:25:42.567272 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 05:25:42.576213 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 05:25:42.644327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 05:25:42.644494 kernel: device-mapper: uevent: version 1.0.3 Jan 30 05:25:42.646429 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 05:25:42.715978 kernel: raid6: avx2x4 gen() 13791 MB/s Jan 30 05:25:42.733975 kernel: raid6: avx2x2 gen() 17412 MB/s Jan 30 05:25:42.751194 kernel: raid6: avx2x1 gen() 19535 MB/s Jan 30 05:25:42.751240 kernel: raid6: using algorithm avx2x1 gen() 19535 MB/s Jan 30 05:25:42.770026 kernel: raid6: .... xor() 14983 MB/s, rmw enabled Jan 30 05:25:42.770097 kernel: raid6: using avx2x2 recovery algorithm Jan 30 05:25:42.792992 kernel: xor: automatically using best checksumming function avx Jan 30 05:25:42.952997 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 05:25:42.971318 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 05:25:42.986209 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:25:42.998956 systemd-udevd[406]: Using default interface naming scheme 'v255'. Jan 30 05:25:43.004201 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:25:43.012116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 05:25:43.028588 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 30 05:25:43.072351 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:25:43.079100 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:25:43.176529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:25:43.183394 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 05:25:43.225404 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 05:25:43.228753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:25:43.232043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:25:43.234220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:25:43.242196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 05:25:43.263319 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:25:43.289918 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 05:25:43.300638 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 05:25:43.300681 kernel: AES CTR mode by8 optimization enabled Jan 30 05:25:43.305934 kernel: ACPI: bus type USB registered Jan 30 05:25:43.316915 kernel: usbcore: registered new interface driver usbfs Jan 30 05:25:43.334054 kernel: scsi host0: Virtio SCSI HBA Jan 30 05:25:43.340162 kernel: usbcore: registered new interface driver hub Jan 30 05:25:43.340220 kernel: usbcore: registered new device driver usb Jan 30 05:25:43.346978 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 30 05:25:43.355691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:25:43.357266 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:25:43.358261 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:25:43.390741 kernel: libata version 3.00 loaded. Jan 30 05:25:43.358777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:25:43.360082 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:43.361165 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:25:43.390991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 05:25:43.425935 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 05:25:43.464056 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 05:25:43.464097 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 05:25:43.464260 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 05:25:43.464397 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 05:25:43.464564 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 30 05:25:43.464706 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 30 05:25:43.465001 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 30 05:25:43.465155 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 30 05:25:43.465294 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 30 05:25:43.465435 kernel: hub 1-0:1.0: USB hub found Jan 30 05:25:43.465614 kernel: hub 1-0:1.0: 4 ports detected Jan 30 05:25:43.465771 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 30 05:25:43.466108 kernel: hub 2-0:1.0: USB hub found Jan 30 05:25:43.466299 kernel: hub 2-0:1.0: 4 ports detected Jan 30 05:25:43.466456 kernel: scsi host1: ahci Jan 30 05:25:43.466616 kernel: scsi host2: ahci Jan 30 05:25:43.466773 kernel: scsi host3: ahci Jan 30 05:25:43.466943 kernel: scsi host4: ahci Jan 30 05:25:43.467116 kernel: scsi host5: ahci Jan 30 05:25:43.467262 kernel: scsi host6: ahci Jan 30 05:25:43.467404 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Jan 30 05:25:43.467415 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Jan 30 05:25:43.467425 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Jan 30 05:25:43.467435 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Jan 30 05:25:43.467445 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Jan 30 05:25:43.467459 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Jan 30 05:25:43.513491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:43.520128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 05:25:43.544443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 05:25:43.691995 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 30 05:25:43.778116 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 05:25:43.778283 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 05:25:43.778331 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 30 05:25:43.781365 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 05:25:43.783946 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 05:25:43.786947 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 05:25:43.790294 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 05:25:43.793992 kernel: ata1.00: applying bridge limits Jan 30 05:25:43.796184 kernel: ata1.00: configured for UDMA/100 Jan 30 05:25:43.801942 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 05:25:43.857195 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 30 05:25:43.890379 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 30 05:25:43.890675 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 05:25:43.890927 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 30 05:25:43.891173 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 05:25:43.891413 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 05:25:43.891430 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 05:25:43.891445 kernel: GPT:17805311 != 80003071 Jan 30 05:25:43.891459 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 05:25:43.891474 kernel: GPT:17805311 != 80003071 Jan 30 05:25:43.891487 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 05:25:43.891502 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:25:43.891516 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 05:25:43.900538 kernel: usbcore: registered new interface driver usbhid Jan 30 05:25:43.900578 kernel: usbhid: USB HID core driver Jan 30 05:25:43.904426 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 05:25:43.921596 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 05:25:43.921621 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 30 05:25:43.921639 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 30 05:25:43.921982 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 30 05:25:43.951939 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (463) Jan 30 05:25:43.957917 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (454) Jan 30 05:25:43.972759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 30 05:25:43.981105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 30 05:25:43.987369 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 30 05:25:43.988199 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 30 05:25:43.995637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 05:25:44.004069 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 30 05:25:44.014024 disk-uuid[578]: Primary Header is updated. Jan 30 05:25:44.014024 disk-uuid[578]: Secondary Entries is updated. Jan 30 05:25:44.014024 disk-uuid[578]: Secondary Header is updated. Jan 30 05:25:44.025939 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:25:44.040956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:25:44.048921 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:25:45.052977 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 05:25:45.055012 disk-uuid[579]: The operation has completed successfully. Jan 30 05:25:45.157722 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 05:25:45.157934 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 05:25:45.176070 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 05:25:45.198278 sh[598]: Success Jan 30 05:25:45.222987 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 05:25:45.321650 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 05:25:45.347288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 05:25:45.357504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 05:25:45.381818 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 05:25:45.382006 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:25:45.385556 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 05:25:45.389467 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 05:25:45.392430 kernel: BTRFS info (device dm-0): using free space tree Jan 30 05:25:45.414994 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 05:25:45.419243 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 05:25:45.422365 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 05:25:45.430343 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 05:25:45.445372 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 05:25:45.482863 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:25:45.482988 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:25:45.485927 kernel: BTRFS info (device sda6): using free space tree Jan 30 05:25:45.495720 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 05:25:45.495756 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 05:25:45.512980 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 05:25:45.517909 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:25:45.526121 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 05:25:45.535196 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 05:25:45.587355 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:25:45.601290 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 05:25:45.631999 systemd-networkd[779]: lo: Link UP Jan 30 05:25:45.632776 systemd-networkd[779]: lo: Gained carrier Jan 30 05:25:45.636410 systemd-networkd[779]: Enumeration completed Jan 30 05:25:45.636541 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:25:45.638427 systemd[1]: Reached target network.target - Network. Jan 30 05:25:45.638707 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:45.638713 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 05:25:45.641643 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:45.641647 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 05:25:45.642934 systemd-networkd[779]: eth0: Link UP Jan 30 05:25:45.642938 systemd-networkd[779]: eth0: Gained carrier Jan 30 05:25:45.642946 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:45.649321 systemd-networkd[779]: eth1: Link UP Jan 30 05:25:45.649332 systemd-networkd[779]: eth1: Gained carrier Jan 30 05:25:45.649347 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:45.652365 ignition[723]: Ignition 2.19.0 Jan 30 05:25:45.652373 ignition[723]: Stage: fetch-offline Jan 30 05:25:45.652517 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:45.652528 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:45.652732 ignition[723]: parsed url from cmdline: "" Jan 30 05:25:45.652739 ignition[723]: no config URL provided Jan 30 05:25:45.652746 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 05:25:45.652758 ignition[723]: no config at "/usr/lib/ignition/user.ign" Jan 30 05:25:45.652765 ignition[723]: failed to fetch config: resource requires networking Jan 30 05:25:45.657725 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:25:45.653092 ignition[723]: Ignition finished successfully Jan 30 05:25:45.662153 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 05:25:45.682585 ignition[787]: Ignition 2.19.0 Jan 30 05:25:45.682603 ignition[787]: Stage: fetch Jan 30 05:25:45.682934 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:45.682948 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:45.683085 ignition[787]: parsed url from cmdline: "" Jan 30 05:25:45.683093 ignition[787]: no config URL provided Jan 30 05:25:45.683100 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 05:25:45.683110 ignition[787]: no config at "/usr/lib/ignition/user.ign" Jan 30 05:25:45.683166 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 30 05:25:45.683433 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 30 05:25:45.711028 systemd-networkd[779]: eth0: DHCPv4 address 128.140.113.241/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 05:25:45.752054 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 05:25:45.883759 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 30 05:25:45.892600 ignition[787]: GET result: OK Jan 30 05:25:45.892753 ignition[787]: parsing config with SHA512: c01dbeec64ae7b2521c67f8363b935c495f2b0120ce748d0e0d2bbc650d1bb87be43827bc09631ec7f9b7c2d070c3d4be2115b90d745805884103eb52be3dc5c Jan 30 05:25:45.903172 unknown[787]: fetched base config from "system" Jan 30 05:25:45.903200 unknown[787]: fetched base config from "system" Jan 30 05:25:45.904197 ignition[787]: fetch: fetch complete Jan 30 05:25:45.903214 unknown[787]: fetched user config from "hetzner" Jan 30 05:25:45.904213 ignition[787]: fetch: fetch passed Jan 30 05:25:45.904341 ignition[787]: Ignition finished successfully Jan 30 05:25:45.912298 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 05:25:45.918253 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 05:25:45.938128 ignition[795]: Ignition 2.19.0 Jan 30 05:25:45.938142 ignition[795]: Stage: kargs Jan 30 05:25:45.938319 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:45.938333 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:45.944865 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 05:25:45.939277 ignition[795]: kargs: kargs passed Jan 30 05:25:45.939327 ignition[795]: Ignition finished successfully Jan 30 05:25:45.966471 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 05:25:45.986556 ignition[802]: Ignition 2.19.0 Jan 30 05:25:45.986581 ignition[802]: Stage: disks Jan 30 05:25:45.987027 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:45.992135 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 05:25:45.987053 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:45.994502 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 05:25:45.989104 ignition[802]: disks: disks passed Jan 30 05:25:45.996064 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 05:25:45.989205 ignition[802]: Ignition finished successfully Jan 30 05:25:45.998144 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 05:25:45.999844 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 30 05:25:46.001447 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:25:46.010235 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 05:25:46.037606 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 05:25:46.041151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 05:25:46.047171 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 05:25:46.136005 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 05:25:46.139176 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 05:25:46.142273 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 05:25:46.149986 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 05:25:46.158205 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 05:25:46.164103 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 05:25:46.164695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 05:25:46.164726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 05:25:46.170922 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 05:25:46.179214 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 05:25:46.186636 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (818) Jan 30 05:25:46.186695 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:25:46.190029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:25:46.190078 kernel: BTRFS info (device sda6): using free space tree Jan 30 05:25:46.197247 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 05:25:46.197314 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 05:25:46.208529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 05:25:46.257852 coreos-metadata[820]: Jan 30 05:25:46.257 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 05:25:46.260468 coreos-metadata[820]: Jan 30 05:25:46.259 INFO Fetch successful Jan 30 05:25:46.261933 coreos-metadata[820]: Jan 30 05:25:46.261 INFO wrote hostname ci-4081-3-0-d-6ba27b8de2 to /sysroot/etc/hostname Jan 30 05:25:46.264186 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:25:46.268638 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 05:25:46.275132 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Jan 30 05:25:46.281391 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 05:25:46.287015 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 05:25:46.405777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 05:25:46.411981 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 05:25:46.414068 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 05:25:46.427593 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 30 05:25:46.431948 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:25:46.454720 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 05:25:46.459914 ignition[935]: INFO : Ignition 2.19.0 Jan 30 05:25:46.459914 ignition[935]: INFO : Stage: mount Jan 30 05:25:46.459914 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:46.459914 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:46.464244 ignition[935]: INFO : mount: mount passed Jan 30 05:25:46.464244 ignition[935]: INFO : Ignition finished successfully Jan 30 05:25:46.463794 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 05:25:46.471021 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 05:25:46.484121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 05:25:46.503932 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (947) Jan 30 05:25:46.509571 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:25:46.509620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:25:46.509644 kernel: BTRFS info (device sda6): using free space tree Jan 30 05:25:46.519023 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 05:25:46.519082 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 05:25:46.525084 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 05:25:46.567046 ignition[964]: INFO : Ignition 2.19.0 Jan 30 05:25:46.567046 ignition[964]: INFO : Stage: files Jan 30 05:25:46.568767 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:46.568767 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:46.570858 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Jan 30 05:25:46.571848 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 05:25:46.571848 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 05:25:46.576423 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 05:25:46.577726 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 05:25:46.577726 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 05:25:46.577116 unknown[964]: wrote ssh authorized keys file for user: core Jan 30 05:25:46.581044 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 05:25:46.581044 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 05:25:46.697544 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 05:25:46.913579 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 05:25:46.913579 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 05:25:46.918537 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 05:25:47.429506 systemd-networkd[779]: eth1: Gained IPv6LL Jan 30 05:25:47.502548 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 05:25:47.621462 systemd-networkd[779]: eth0: Gained IPv6LL Jan 30 05:25:47.799448 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 05:25:47.799448 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 05:25:47.804878 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:25:47.804878 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:25:47.804878 ignition[964]: INFO : files: files passed Jan 30 05:25:47.804878 ignition[964]: INFO : Ignition finished successfully Jan 30 05:25:47.807273 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 05:25:47.817092 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 05:25:47.821040 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 05:25:47.829323 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 05:25:47.829586 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 05:25:47.842944 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:25:47.842944 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:25:47.845394 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:25:47.847603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:25:47.848831 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 05:25:47.854050 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 05:25:47.892348 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 05:25:47.892506 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 05:25:47.894141 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 05:25:47.895000 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 05:25:47.896195 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 05:25:47.897758 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 05:25:47.916530 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:25:47.923165 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 05:25:47.936858 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:25:47.939092 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:25:47.940262 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 05:25:47.942290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 05:25:47.942472 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:25:47.944696 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 30 05:25:47.945767 systemd[1]: Stopped target basic.target - Basic System. Jan 30 05:25:47.947471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 05:25:47.949012 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 05:25:47.950517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 05:25:47.952384 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 05:25:47.954207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:25:47.956189 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 05:25:47.957842 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 05:25:47.959473 systemd[1]: Stopped target swap.target - Swaps. Jan 30 05:25:47.960919 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 05:25:47.961078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:25:47.962754 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:25:47.963806 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:25:47.965234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 05:25:47.965395 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:25:47.966930 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 05:25:47.967091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 05:25:47.969243 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 05:25:47.969414 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:25:47.970447 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 05:25:47.970660 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 05:25:47.971860 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 05:25:47.972035 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:25:47.981148 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 05:25:47.982810 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 05:25:47.983939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:25:47.991263 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 05:25:47.992152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 05:25:47.992375 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:25:47.998387 ignition[1017]: INFO : Ignition 2.19.0 Jan 30 05:25:47.998387 ignition[1017]: INFO : Stage: umount Jan 30 05:25:47.998387 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:25:47.998387 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 05:25:47.997247 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 05:25:48.005493 ignition[1017]: INFO : umount: umount passed Jan 30 05:25:48.005493 ignition[1017]: INFO : Ignition finished successfully Jan 30 05:25:47.997445 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:25:48.007434 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 30 05:25:48.007559 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 05:25:48.008841 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 05:25:48.010604 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 05:25:48.011637 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 05:25:48.012199 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 05:25:48.013203 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 05:25:48.013249 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 05:25:48.014260 systemd[1]: Stopped target network.target - Network. Jan 30 05:25:48.015376 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 05:25:48.015427 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:25:48.017242 systemd[1]: Stopped target paths.target - Path Units. Jan 30 05:25:48.017639 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 05:25:48.021388 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:25:48.023180 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 05:25:48.024922 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 05:25:48.026223 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 05:25:48.026280 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 05:25:48.027515 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 05:25:48.027556 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:25:48.029570 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 05:25:48.029628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 05:25:48.030234 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 05:25:48.030284 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 05:25:48.030924 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 05:25:48.031542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 05:25:48.035058 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 05:25:48.035653 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 05:25:48.035751 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 05:25:48.039107 systemd-networkd[779]: eth1: DHCPv6 lease lost Jan 30 05:25:48.041716 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 05:25:48.041830 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 05:25:48.041950 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 30 05:25:48.049193 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 05:25:48.049335 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 05:25:48.056616 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 05:25:48.056676 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:25:48.069990 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 05:25:48.074267 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 05:25:48.074372 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:25:48.075362 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 30 05:25:48.075440 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:25:48.076627 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 05:25:48.076717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 05:25:48.078242 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 05:25:48.078324 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:25:48.082393 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:25:48.094565 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 05:25:48.094691 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 05:25:48.101314 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 05:25:48.101477 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:25:48.104782 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 05:25:48.105246 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 05:25:48.106405 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 05:25:48.106473 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 05:25:48.107159 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 05:25:48.107197 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:25:48.108152 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 05:25:48.108200 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 05:25:48.109591 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 05:25:48.109635 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 05:25:48.110648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:25:48.110697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:25:48.111694 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 05:25:48.111744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 05:25:48.118315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 05:25:48.118805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 05:25:48.118857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:25:48.119403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:25:48.119448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:48.125841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 05:25:48.125990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 05:25:48.127569 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 05:25:48.133018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 05:25:48.140741 systemd[1]: Switching root. Jan 30 05:25:48.172319 systemd-journald[187]: Journal stopped Jan 30 05:25:49.550173 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Jan 30 05:25:49.550242 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 05:25:49.550257 kernel: SELinux: policy capability open_perms=1 Jan 30 05:25:49.550267 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 05:25:49.550282 kernel: SELinux: policy capability always_check_network=0 Jan 30 05:25:49.550300 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 05:25:49.550314 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 05:25:49.550325 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 05:25:49.550337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 05:25:49.550352 kernel: audit: type=1403 audit(1738214748.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 05:25:49.550365 systemd[1]: Successfully loaded SELinux policy in 49.854ms. Jan 30 05:25:49.550386 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.505ms. Jan 30 05:25:49.550400 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:25:49.550412 systemd[1]: Detected virtualization kvm. Jan 30 05:25:49.550424 systemd[1]: Detected architecture x86-64. Jan 30 05:25:49.550435 systemd[1]: Detected first boot. Jan 30 05:25:49.550447 systemd[1]: Hostname set to <ci-4081-3-0-d-6ba27b8de2>. Jan 30 05:25:49.550458 systemd[1]: Initializing machine ID from VM UUID. Jan 30 05:25:49.550471 zram_generator::config[1059]: No configuration found. Jan 30 05:25:49.550493 systemd[1]: Populated /etc with preset unit settings. Jan 30 05:25:49.550519 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 05:25:49.550537 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 05:25:49.550554 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 05:25:49.550571 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 05:25:49.550588 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 05:25:49.550605 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 05:25:49.550619 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 05:25:49.550631 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 05:25:49.550648 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 05:25:49.550662 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 05:25:49.550674 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 05:25:49.550685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:25:49.550697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:25:49.550709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 05:25:49.550720 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 05:25:49.550732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 30 05:25:49.550743 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:25:49.550757 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 05:25:49.550769 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:25:49.550780 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 05:25:49.550792 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 05:25:49.550803 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 05:25:49.550819 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 05:25:49.550833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:25:49.550844 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:25:49.550856 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:25:49.550868 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:25:49.550879 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 05:25:49.550906 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 05:25:49.550918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:25:49.550930 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:25:49.550941 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:25:49.550953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 05:25:49.550968 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 05:25:49.550980 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 05:25:49.550996 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 05:25:49.551010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:49.551022 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 05:25:49.551039 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 05:25:49.551065 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 05:25:49.551083 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 05:25:49.551099 systemd[1]: Reached target machines.target - Containers. Jan 30 05:25:49.551112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 05:25:49.551126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:49.551137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:25:49.551149 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 05:25:49.551161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:25:49.551195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:25:49.551207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:49.551218 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 30 05:25:49.551230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:49.551242 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:25:49.551254 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 05:25:49.551265 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 05:25:49.551277 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 05:25:49.551290 kernel: fuse: init (API version 7.39) Jan 30 05:25:49.551302 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 05:25:49.551314 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:25:49.551326 kernel: loop: module loaded Jan 30 05:25:49.551337 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:25:49.551349 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 05:25:49.551360 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 05:25:49.551372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:25:49.551384 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 05:25:49.551399 systemd[1]: Stopped verity-setup.service. Jan 30 05:25:49.551410 kernel: ACPI: bus type drm_connector registered Jan 30 05:25:49.551422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:49.551434 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 05:25:49.551477 systemd-journald[1139]: Collecting audit messages is disabled. Jan 30 05:25:49.551520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 05:25:49.551538 systemd-journald[1139]: Journal started Jan 30 05:25:49.551571 systemd-journald[1139]: Runtime Journal (/run/log/journal/e7131cd6b3f443758d14365ea9164141) is 4.8M, max 38.4M, 33.6M free. Jan 30 05:25:49.128873 systemd[1]: Queued start job for default target multi-user.target. Jan 30 05:25:49.147930 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 05:25:49.148818 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 05:25:49.554976 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 05:25:49.556722 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 05:25:49.557495 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 05:25:49.558190 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 05:25:49.559082 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 05:25:49.560250 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 05:25:49.561201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:25:49.562177 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 05:25:49.562403 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 05:25:49.563332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:49.563573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:49.564658 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 05:25:49.564877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:25:49.565770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:49.566078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:49.567023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 05:25:49.567262 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 05:25:49.568289 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:25:49.568525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:49.569529 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:25:49.570390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 05:25:49.571351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 05:25:49.590069 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 05:25:49.596963 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 05:25:49.602999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 05:25:49.603649 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:25:49.604971 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 05:25:49.608975 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 05:25:49.616570 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 05:25:49.620032 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 05:25:49.622115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:49.630053 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 05:25:49.634350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 05:25:49.635436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:25:49.642059 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 05:25:49.642652 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:25:49.653052 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:25:49.660097 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 05:25:49.663598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 05:25:49.667275 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 05:25:49.668883 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 05:25:49.671271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 05:25:49.678447 systemd-journald[1139]: Time spent on flushing to /var/log/journal/e7131cd6b3f443758d14365ea9164141 is 43.486ms for 1136 entries. 
Jan 30 05:25:49.678447 systemd-journald[1139]: System Journal (/var/log/journal/e7131cd6b3f443758d14365ea9164141) is 8.0M, max 584.8M, 576.8M free. Jan 30 05:25:49.753196 kernel: loop0: detected capacity change from 0 to 8 Jan 30 05:25:49.753219 systemd-journald[1139]: Received client request to flush runtime journal. Jan 30 05:25:49.753243 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 05:25:49.753256 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 05:25:49.705428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 05:25:49.709203 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 05:25:49.719499 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 05:25:49.738029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:25:49.751291 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 05:25:49.758261 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 05:25:49.760338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:25:49.778300 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 05:25:49.794446 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 05:25:49.795065 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 05:25:49.812913 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 05:25:49.821079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:25:49.824928 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 05:25:49.861409 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 30 05:25:49.861428 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jan 30 05:25:49.871211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:25:49.877922 kernel: loop3: detected capacity change from 0 to 218376 Jan 30 05:25:49.938532 kernel: loop4: detected capacity change from 0 to 8 Jan 30 05:25:49.944944 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 05:25:49.969914 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 05:25:50.002410 kernel: loop7: detected capacity change from 0 to 218376 Jan 30 05:25:50.024937 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 05:25:50.026982 (sd-merge)[1205]: Merged extensions into '/usr'. Jan 30 05:25:50.032927 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 05:25:50.033086 systemd[1]: Reloading... Jan 30 05:25:50.167921 zram_generator::config[1237]: No configuration found. Jan 30 05:25:50.302386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:25:50.305371 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 05:25:50.359171 systemd[1]: Reloading finished in 325 ms. Jan 30 05:25:50.383547 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 30 05:25:50.384681 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 05:25:50.396804 systemd[1]: Starting ensure-sysext.service... Jan 30 05:25:50.409149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:25:50.414002 systemd[1]: Reloading requested from client PID 1274 ('systemctl') (unit ensure-sysext.service)... Jan 30 05:25:50.414016 systemd[1]: Reloading... Jan 30 05:25:50.437693 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:25:50.438099 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:25:50.439090 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:25:50.439393 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 30 05:25:50.439466 systemd-tmpfiles[1275]: ACLs are not supported, ignoring. Jan 30 05:25:50.443336 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:25:50.443514 systemd-tmpfiles[1275]: Skipping /boot Jan 30 05:25:50.457867 systemd-tmpfiles[1275]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:25:50.458046 systemd-tmpfiles[1275]: Skipping /boot Jan 30 05:25:50.501925 zram_generator::config[1305]: No configuration found. Jan 30 05:25:50.618064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:25:50.672449 systemd[1]: Reloading finished in 258 ms. Jan 30 05:25:50.692250 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 05:25:50.693224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:25:50.707036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:25:50.712097 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:25:50.716078 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:25:50.724036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:25:50.729023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:25:50.736031 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:25:50.750245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.750417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:50.757098 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:25:50.765125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:50.770096 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:50.770777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 05:25:50.770876 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.777715 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 05:25:50.781859 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:25:50.785615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.785834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:50.787207 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:50.787294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.791361 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.791560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:50.797375 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:25:50.798305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:50.798574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:50.803650 systemd[1]: Finished ensure-sysext.service. Jan 30 05:25:50.807568 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jan 30 05:25:50.817083 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 05:25:50.818100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:50.818289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:50.819230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:50.820154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:50.823650 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:25:50.823886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:50.825379 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:25:50.825553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:25:50.833120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:25:50.834004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:25:50.853840 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 05:25:50.875057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:25:50.886225 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:25:50.900500 augenrules[1388]: No rules Jan 30 05:25:50.903749 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 30 05:25:50.905529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:25:50.917125 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 05:25:50.918562 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:25:50.936162 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:25:50.937282 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:25:50.999336 systemd-resolved[1351]: Positive Trust Anchors: Jan 30 05:25:50.999680 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:25:50.999762 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:25:51.003974 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:25:51.005476 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:25:51.015549 systemd-resolved[1351]: Using system hostname 'ci-4081-3-0-d-6ba27b8de2'. Jan 30 05:25:51.021180 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:25:51.021930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:25:51.037566 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 05:25:51.041342 systemd-networkd[1397]: lo: Link UP Jan 30 05:25:51.041355 systemd-networkd[1397]: lo: Gained carrier Jan 30 05:25:51.071076 systemd-networkd[1397]: Enumeration completed Jan 30 05:25:51.071371 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:25:51.072007 systemd[1]: Reached target network.target - Network. Jan 30 05:25:51.081151 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 05:25:51.111601 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:51.111734 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 05:25:51.114760 systemd-networkd[1397]: eth0: Link UP Jan 30 05:25:51.114769 systemd-networkd[1397]: eth0: Gained carrier Jan 30 05:25:51.114784 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:51.124084 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:51.124094 systemd-networkd[1397]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 05:25:51.125768 systemd-networkd[1397]: eth1: Link UP Jan 30 05:25:51.125775 systemd-networkd[1397]: eth1: Gained carrier Jan 30 05:25:51.125787 systemd-networkd[1397]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 05:25:51.130144 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1407) Jan 30 05:25:51.146927 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 05:25:51.156945 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 05:25:51.170929 kernel: ACPI: button: Power Button [PWRF] Jan 30 05:25:51.176028 systemd-networkd[1397]: eth0: DHCPv4 address 128.140.113.241/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 05:25:51.177981 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Jan 30 05:25:51.187056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 05:25:51.194140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 05:25:51.213079 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 05:25:51.213120 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:51.213242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:25:51.221062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:25:51.227990 systemd-networkd[1397]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 05:25:51.228295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:25:51.230107 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Jan 30 05:25:51.231250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:25:51.232123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:25:51.232153 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:25:51.232168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:25:51.232956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 05:25:51.238328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:25:51.239021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:25:51.248562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:25:51.250146 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:25:51.255180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:25:51.264258 systemd[1]: modprobe@loop.service: Deactivated successfully. 
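systemd-networkd notes above that zz-default.network matched eth0 and eth1 based on potentially unpredictable interface names. A common way to avoid that warning is a more specific .network file that matches on the NIC's MAC address instead of its name; the filename, MAC and DHCP choice below are placeholders, not values taken from this host:

    # /etc/systemd/network/10-uplink.network (sketch; substitute the real MAC)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4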
Jan 30 05:25:51.265003 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 05:25:51.264788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:25:51.265607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:25:51.272919 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 05:25:51.281126 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 05:25:51.281321 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 05:25:51.281453 kernel: EDAC MC: Ver: 3.0.0 Jan 30 05:25:51.313418 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 30 05:25:51.313502 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 30 05:25:51.315191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:25:51.320985 kernel: Console: switching to colour dummy device 80x25 Jan 30 05:25:51.321121 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 05:25:51.321139 kernel: [drm] features: -context_init Jan 30 05:25:51.321155 kernel: [drm] number of scanouts: 1 Jan 30 05:25:51.321170 kernel: [drm] number of cap sets: 0 Jan 30 05:25:51.325928 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 05:25:51.335910 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 05:25:51.335965 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 05:25:51.332776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:25:51.333029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:51.341909 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 05:25:51.353064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:25:51.445555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:25:51.466490 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 05:25:51.474189 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 05:25:51.511685 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:25:51.560291 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 05:25:51.561069 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:25:51.561333 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:25:51.561708 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 05:25:51.562429 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:25:51.564839 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:25:51.566610 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:25:51.566885 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 05:25:51.568089 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:25:51.568178 systemd[1]: Reached target paths.target - Path Units. 
Jan 30 05:25:51.568401 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:25:51.570339 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:25:51.575483 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:25:51.588967 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 05:25:51.596239 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 05:25:51.599174 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:25:51.600411 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:25:51.600568 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:25:51.600796 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:25:51.600851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:25:51.614203 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:25:51.623264 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 05:25:51.631543 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:25:51.645163 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:25:51.654284 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:25:51.666456 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:25:51.670380 jq[1462]: false Jan 30 05:25:51.671231 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:25:51.677084 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:25:51.685982 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:25:51.692035 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 05:25:51.698030 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:25:51.708045 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 05:25:51.720041 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 05:25:51.720368 dbus-daemon[1461]: [system] SELinux support is enabled Jan 30 05:25:51.722378 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 05:25:51.722930 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 30 05:25:51.725759 extend-filesystems[1463]: Found loop4 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found loop5 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found loop6 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found loop7 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda1 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda2 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda3 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found usr Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda4 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda6 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda7 Jan 30 05:25:51.732849 extend-filesystems[1463]: Found sda9 Jan 30 05:25:51.732849 extend-filesystems[1463]: Checking size of /dev/sda9 Jan 30 05:25:51.731088 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 05:25:51.806803 coreos-metadata[1460]: Jan 30 05:25:51.788 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 05:25:51.815672 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 05:25:51.815742 extend-filesystems[1463]: Resized partition /dev/sda9 Jan 30 05:25:51.754018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 05:25:51.823735 coreos-metadata[1460]: Jan 30 05:25:51.816 INFO Fetch successful Jan 30 05:25:51.823735 coreos-metadata[1460]: Jan 30 05:25:51.816 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 05:25:51.823787 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:25:51.776277 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 05:25:51.848288 coreos-metadata[1460]: Jan 30 05:25:51.830 INFO Fetch successful Jan 30 05:25:51.793730 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 05:25:51.812433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 05:25:51.812670 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 05:25:51.813054 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 05:25:51.848763 jq[1482]: true Jan 30 05:25:51.813276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 05:25:51.826172 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 05:25:51.826397 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 05:25:51.875811 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:25:51.875861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:25:51.877846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:25:51.877863 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:25:51.885975 update_engine[1474]: I20250130 05:25:51.881867 1474 main.cc:92] Flatcar Update Engine starting Jan 30 05:25:51.886691 systemd-logind[1473]: New seat seat0. 
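coreos-metadata above fetches the Hetzner metadata service over the link-local address 169.254.169.254. The same endpoints it reports as "Fetch successful" can be queried by hand for debugging, assuming curl (or any other HTTP client) is available on the node:

    # Same endpoints the metadata agent fetched above (sketch)
    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks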
Jan 30 05:25:51.892362 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:25:51.893630 jq[1493]: true Jan 30 05:25:51.893901 systemd-logind[1473]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 05:25:51.894968 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:25:51.896638 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:25:51.900398 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 05:25:51.905283 update_engine[1474]: I20250130 05:25:51.905009 1474 update_check_scheduler.cc:74] Next update check in 2m14s Jan 30 05:25:51.906598 systemd[1]: Started update-engine.service - Update Engine. Jan 30 05:25:51.917740 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 05:25:51.929000 tar[1492]: linux-amd64/LICENSE Jan 30 05:25:51.930146 tar[1492]: linux-amd64/helm Jan 30 05:25:52.034940 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1409) Jan 30 05:25:52.039812 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 05:25:52.042585 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:25:52.049158 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 05:25:52.061752 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:25:52.068247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:25:52.097991 extend-filesystems[1488]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 05:25:52.097991 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 05:25:52.097991 extend-filesystems[1488]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 30 05:25:52.118599 extend-filesystems[1463]: Resized filesystem in /dev/sda9 Jan 30 05:25:52.118599 extend-filesystems[1463]: Found sr0 Jan 30 05:25:52.125772 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:25:52.101276 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:25:52.101497 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 05:25:52.118545 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:25:52.131040 locksmithd[1511]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:25:52.138678 systemd[1]: Starting sshkeys.service... Jan 30 05:25:52.140570 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:25:52.140795 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:25:52.155122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:25:52.165437 systemd-networkd[1397]: eth1: Gained IPv6LL Jan 30 05:25:52.167365 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Jan 30 05:25:52.169318 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:25:52.173662 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:25:52.178504 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:25:52.190496 systemd[1]: Started getty@tty1.service - Getty on tty1. 
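extend-filesystems grows the root filesystem online: resize2fs reports /dev/sda9 going from 1617920 to 9393147 4k blocks while mounted on /. Done by hand, the equivalent step is a plain online resize; the growpart call is an assumption about how the partition itself would be enlarged first and is not something this log shows:

    # Grow the last partition, then resize ext4 online (sketch)
    growpart /dev/sda 9     # from cloud-utils; assumed available, not shown in this log
    resize2fs /dev/sda9     # online resize to fill the partition, as resize2fs 1.47.1 did above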
Jan 30 05:25:52.199575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:25:52.212150 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 05:25:52.225291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:25:52.226840 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:25:52.242327 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:25:52.253690 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 05:25:52.268130 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:25:52.330194 coreos-metadata[1567]: Jan 30 05:25:52.329 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 05:25:52.334138 coreos-metadata[1567]: Jan 30 05:25:52.334 INFO Fetch successful Jan 30 05:25:52.340216 unknown[1567]: wrote ssh authorized keys file for user: core Jan 30 05:25:52.372076 containerd[1502]: time="2025-01-30T05:25:52.371999867Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 05:25:52.375981 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:25:52.375456 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:25:52.381557 systemd[1]: Finished sshkeys.service. Jan 30 05:25:52.428042 containerd[1502]: time="2025-01-30T05:25:52.427773643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432169939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432195478Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432209734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432407559Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432422297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432484265Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:52.432560 containerd[1502]: time="2025-01-30T05:25:52.432495145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434021 containerd[1502]: time="2025-01-30T05:25:52.433865009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434021 containerd[1502]: time="2025-01-30T05:25:52.433932257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434021 containerd[1502]: time="2025-01-30T05:25:52.433952335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434021 containerd[1502]: time="2025-01-30T05:25:52.433964949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434211 containerd[1502]: time="2025-01-30T05:25:52.434115243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434906 containerd[1502]: time="2025-01-30T05:25:52.434374294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434906 containerd[1502]: time="2025-01-30T05:25:52.434514840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:25:52.434906 containerd[1502]: time="2025-01-30T05:25:52.434529227Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:25:52.434906 containerd[1502]: time="2025-01-30T05:25:52.434637042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 05:25:52.434906 containerd[1502]: time="2025-01-30T05:25:52.434704840Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:25:52.444056 containerd[1502]: time="2025-01-30T05:25:52.444014703Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:25:52.444110 containerd[1502]: time="2025-01-30T05:25:52.444079446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 05:25:52.444393 containerd[1502]: time="2025-01-30T05:25:52.444364586Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:25:52.444421 containerd[1502]: time="2025-01-30T05:25:52.444405373Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:25:52.444444 containerd[1502]: time="2025-01-30T05:25:52.444421554Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 05:25:52.444920 containerd[1502]: time="2025-01-30T05:25:52.444576147Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:25:52.444920 containerd[1502]: time="2025-01-30T05:25:52.444784441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 05:25:52.444965 containerd[1502]: time="2025-01-30T05:25:52.444927091Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 05:25:52.444965 containerd[1502]: time="2025-01-30T05:25:52.444941899Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:25:52.444965 containerd[1502]: time="2025-01-30T05:25:52.444956046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:25:52.445016 containerd[1502]: time="2025-01-30T05:25:52.444968800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445016 containerd[1502]: time="2025-01-30T05:25:52.444981985Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445016 containerd[1502]: time="2025-01-30T05:25:52.444992845Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445016 containerd[1502]: time="2025-01-30T05:25:52.445006321Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445020117Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445033502Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445045285Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445055965Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445073589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445091 containerd[1502]: time="2025-01-30T05:25:52.445087114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445099317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445112402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445124855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445137390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445152107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445164751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445176333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445188656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445199357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445205 containerd[1502]: time="2025-01-30T05:25:52.445211290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445223352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445259100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445282775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445295910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445316108Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445363026Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445386331Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445397752Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 05:25:52.445406 containerd[1502]: time="2025-01-30T05:25:52.445409244Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:25:52.445569 containerd[1502]: time="2025-01-30T05:25:52.445419193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:25:52.445569 containerd[1502]: time="2025-01-30T05:25:52.445432318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:25:52.445569 containerd[1502]: time="2025-01-30T05:25:52.445448839Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:25:52.445569 containerd[1502]: time="2025-01-30T05:25:52.445462715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 05:25:52.446323 containerd[1502]: time="2025-01-30T05:25:52.445708942Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:25:52.446323 containerd[1502]: time="2025-01-30T05:25:52.445765288Z" level=info msg="Connect containerd service" Jan 30 05:25:52.446323 containerd[1502]: time="2025-01-30T05:25:52.445800866Z" level=info msg="using legacy CRI server" Jan 30 05:25:52.446323 containerd[1502]: time="2025-01-30T05:25:52.445807088Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:25:52.446517 containerd[1502]: time="2025-01-30T05:25:52.446344556Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:25:52.458662 containerd[1502]: time="2025-01-30T05:25:52.458158281Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 05:25:52.459728 
containerd[1502]: time="2025-01-30T05:25:52.459696995Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:25:52.459795 containerd[1502]: time="2025-01-30T05:25:52.459771246Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:25:52.459849 containerd[1502]: time="2025-01-30T05:25:52.459800150Z" level=info msg="Start subscribing containerd event" Jan 30 05:25:52.459901 containerd[1502]: time="2025-01-30T05:25:52.459866386Z" level=info msg="Start recovering state" Jan 30 05:25:52.459992 containerd[1502]: time="2025-01-30T05:25:52.459970754Z" level=info msg="Start event monitor" Jan 30 05:25:52.460019 containerd[1502]: time="2025-01-30T05:25:52.459991914Z" level=info msg="Start snapshots syncer" Jan 30 05:25:52.460053 containerd[1502]: time="2025-01-30T05:25:52.460017052Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:25:52.460053 containerd[1502]: time="2025-01-30T05:25:52.460026690Z" level=info msg="Start streaming server" Jan 30 05:25:52.462204 containerd[1502]: time="2025-01-30T05:25:52.460106882Z" level=info msg="containerd successfully booted in 0.090797s" Jan 30 05:25:52.460218 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:25:52.682822 tar[1492]: linux-amd64/README.md Jan 30 05:25:52.699230 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 05:25:52.805255 systemd-networkd[1397]: eth0: Gained IPv6LL Jan 30 05:25:52.805863 systemd-timesyncd[1367]: Network configuration changed, trying to establish connection. Jan 30 05:25:53.667251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:25:53.667472 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:25:53.671462 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:25:53.683457 systemd[1]: Startup finished in 1.708s (kernel) + 6.546s (initrd) + 5.350s (userspace) = 13.605s. Jan 30 05:25:54.590782 kubelet[1591]: E0130 05:25:54.590657 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:25:54.598647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:25:54.599141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:25:54.599829 systemd[1]: kubelet.service: Consumed 1.514s CPU time. Jan 30 05:26:04.849729 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:26:04.861273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:05.068681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
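containerd comes up, but its CRI plugin logs above that no network config was found in /etc/cni/net.d, so pod networking cannot be set up yet; this normally resolves itself once a CNI plugin or kubeadm-installed add-on drops a conflist there. A hedged sketch of such a file, with the name, bridge and subnet chosen arbitrarily rather than taken from this host:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Saved as, say, /etc/cni/net.d/10-containerd-net.conflist, a file of this shape is what the CRI plugin looks for.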
Jan 30 05:26:05.073185 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:05.111781 kubelet[1610]: E0130 05:26:05.111614 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:05.116877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:05.117451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:15.368362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 05:26:15.375240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:15.582311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:15.602400 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:15.670553 kubelet[1625]: E0130 05:26:15.670392 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:15.674581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:15.675044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:23.045794 systemd-timesyncd[1367]: Contacted time server 62.75.236.38:123 (2.flatcar.pool.ntp.org). Jan 30 05:26:23.045957 systemd-timesyncd[1367]: Initial clock synchronization to Thu 2025-01-30 05:26:23.278679 UTC. Jan 30 05:26:25.785668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 05:26:25.792209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:26.052246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:26.054572 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:26.100439 kubelet[1640]: E0130 05:26:26.100343 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:26.109598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:26.109846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:36.285699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 05:26:36.293236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:36.538252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
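The kubelet failures that repeat from here on are all the same condition: /var/lib/kubelet/config.yaml does not exist because kubeadm init/join has not yet run on this node. That file is normally generated by kubeadm; for reference, a minimal hand-written KubeletConfiguration has this shape (values illustrative):

    # /var/lib/kubelet/config.yaml - normally written by kubeadm, sketched here
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

Until the file appears, each restart below fails the same way.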
Jan 30 05:26:36.539807 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:36.622060 kubelet[1655]: E0130 05:26:36.621890 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:36.629834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:36.630140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:37.124223 update_engine[1474]: I20250130 05:26:37.124028 1474 update_attempter.cc:509] Updating boot flags... Jan 30 05:26:37.217055 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1671) Jan 30 05:26:37.303940 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1673) Jan 30 05:26:37.356935 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1673) Jan 30 05:26:46.785127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 05:26:46.792246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:47.033836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:47.044198 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:47.088033 kubelet[1691]: E0130 05:26:47.087876 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:47.094708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:47.095189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:26:57.284762 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 05:26:57.292104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:26:57.539311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:26:57.540433 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:26:57.590721 kubelet[1706]: E0130 05:26:57.590618 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:26:57.598549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:26:57.598977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:07.785355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 05:27:07.793235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 05:27:08.032073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:08.044403 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:08.093772 kubelet[1721]: E0130 05:27:08.093678 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:08.100449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:08.100778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:18.285187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 05:27:18.292235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:18.521549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:18.533220 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:18.581841 kubelet[1736]: E0130 05:27:18.581576 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:18.588394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:18.588817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:28.785562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 05:27:28.798549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:29.007325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:29.011244 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:29.046255 kubelet[1751]: E0130 05:27:29.046075 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:29.053140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:29.053351 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:39.285579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 30 05:27:39.293986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:39.526293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
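The roughly ten-second cadence of the "Scheduled restart job" entries matches an ordinary systemd restart policy on the kubelet unit. The unit shipped on this image may differ, but behaviour like the above is what a [Service] section along these lines produces:

    # Sketch of restart settings consistent with the ~10 s retry loop above
    [Service]
    Restart=always
    RestartSec=10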
Jan 30 05:27:39.526759 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:39.567095 kubelet[1766]: E0130 05:27:39.566928 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:39.574659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:39.574857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:45.635733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 05:27:45.644388 systemd[1]: Started sshd@0-128.140.113.241:22-139.178.89.65:50560.service - OpenSSH per-connection server daemon (139.178.89.65:50560). Jan 30 05:27:46.668187 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 50560 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:46.671455 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:46.680100 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 05:27:46.689374 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 05:27:46.691677 systemd-logind[1473]: New session 1 of user core. Jan 30 05:27:46.701994 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 05:27:46.708152 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 05:27:46.721469 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 05:27:46.881846 systemd[1778]: Queued start job for default target default.target. Jan 30 05:27:46.889400 systemd[1778]: Created slice app.slice - User Application Slice. Jan 30 05:27:46.889427 systemd[1778]: Reached target paths.target - Paths. Jan 30 05:27:46.889440 systemd[1778]: Reached target timers.target - Timers. Jan 30 05:27:46.891175 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 05:27:46.921492 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 05:27:46.921751 systemd[1778]: Reached target sockets.target - Sockets. Jan 30 05:27:46.921782 systemd[1778]: Reached target basic.target - Basic System. Jan 30 05:27:46.921856 systemd[1778]: Reached target default.target - Main User Target. Jan 30 05:27:46.921987 systemd[1778]: Startup finished in 186ms. Jan 30 05:27:46.922047 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 05:27:46.934245 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 05:27:47.643542 systemd[1]: Started sshd@1-128.140.113.241:22-139.178.89.65:50564.service - OpenSSH per-connection server daemon (139.178.89.65:50564). Jan 30 05:27:48.637707 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 50564 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:48.641160 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:48.651991 systemd-logind[1473]: New session 2 of user core. Jan 30 05:27:48.655159 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 30 05:27:49.326673 sshd[1789]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:49.337263 systemd[1]: sshd@1-128.140.113.241:22-139.178.89.65:50564.service: Deactivated successfully. Jan 30 05:27:49.343774 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 05:27:49.345450 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Jan 30 05:27:49.347626 systemd-logind[1473]: Removed session 2. Jan 30 05:27:49.507550 systemd[1]: Started sshd@2-128.140.113.241:22-139.178.89.65:50574.service - OpenSSH per-connection server daemon (139.178.89.65:50574). Jan 30 05:27:49.785491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 05:27:49.797312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:27:49.985786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:27:49.997166 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:27:50.055762 kubelet[1806]: E0130 05:27:50.055460 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:27:50.061214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:27:50.061563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:27:50.510193 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 50574 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:50.513394 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:50.522517 systemd-logind[1473]: New session 3 of user core. Jan 30 05:27:50.532265 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 05:27:51.191842 sshd[1796]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:51.197316 systemd[1]: sshd@2-128.140.113.241:22-139.178.89.65:50574.service: Deactivated successfully. Jan 30 05:27:51.201653 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 05:27:51.204745 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit. Jan 30 05:27:51.207222 systemd-logind[1473]: Removed session 3. Jan 30 05:27:51.366334 systemd[1]: Started sshd@3-128.140.113.241:22-139.178.89.65:33086.service - OpenSSH per-connection server daemon (139.178.89.65:33086). Jan 30 05:27:52.367605 sshd[1818]: Accepted publickey for core from 139.178.89.65 port 33086 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:52.371139 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:52.381560 systemd-logind[1473]: New session 4 of user core. Jan 30 05:27:52.388267 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 05:27:53.060100 sshd[1818]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:53.068467 systemd[1]: sshd@3-128.140.113.241:22-139.178.89.65:33086.service: Deactivated successfully. Jan 30 05:27:53.073875 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 05:27:53.078994 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit. Jan 30 05:27:53.082486 systemd-logind[1473]: Removed session 4. 
Jan 30 05:27:53.241360 systemd[1]: Started sshd@4-128.140.113.241:22-139.178.89.65:33102.service - OpenSSH per-connection server daemon (139.178.89.65:33102). Jan 30 05:27:54.257474 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 33102 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:54.261140 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:54.271268 systemd-logind[1473]: New session 5 of user core. Jan 30 05:27:54.279194 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 05:27:54.807135 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 05:27:54.807867 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:54.833566 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:54.997006 sshd[1825]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:55.001610 systemd[1]: sshd@4-128.140.113.241:22-139.178.89.65:33102.service: Deactivated successfully. Jan 30 05:27:55.004582 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 05:27:55.006972 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit. Jan 30 05:27:55.008950 systemd-logind[1473]: Removed session 5. Jan 30 05:27:55.178037 systemd[1]: Started sshd@5-128.140.113.241:22-139.178.89.65:33116.service - OpenSSH per-connection server daemon (139.178.89.65:33116). Jan 30 05:27:56.187690 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 33116 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:56.190979 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:56.200128 systemd-logind[1473]: New session 6 of user core. Jan 30 05:27:56.210249 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 05:27:56.725475 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 05:27:56.726218 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:56.735042 sudo[1837]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:56.749774 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 05:27:56.750733 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:56.778457 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 05:27:56.784651 auditctl[1840]: No rules Jan 30 05:27:56.788175 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 05:27:56.788825 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 05:27:56.796589 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:27:56.865032 augenrules[1858]: No rules Jan 30 05:27:56.866557 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:27:56.869230 sudo[1836]: pam_unix(sudo:session): session closed for user root Jan 30 05:27:57.032099 sshd[1833]: pam_unix(sshd:session): session closed for user core Jan 30 05:27:57.039547 systemd[1]: sshd@5-128.140.113.241:22-139.178.89.65:33116.service: Deactivated successfully. Jan 30 05:27:57.043642 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 05:27:57.046961 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit. 
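The sudo commands in session 6 above delete the default rule files under /etc/audit/rules.d/ and restart audit-rules.service, which is why both auditctl and augenrules subsequently report "No rules". A sketch for verifying the resulting audit state (standard audit userspace tools assumed):

    # Kernel audit rules currently loaded (empty after the reload above)
    auditctl -l

    # Rule fragments augenrules would merge from /etc/audit/rules.d/
    ls /etc/audit/rules.d/

    # Re-generate and load the merged rule set
    augenrules --check
    augenrules --load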
Jan 30 05:27:57.048824 systemd-logind[1473]: Removed session 6. Jan 30 05:27:57.207364 systemd[1]: Started sshd@6-128.140.113.241:22-139.178.89.65:33118.service - OpenSSH per-connection server daemon (139.178.89.65:33118). Jan 30 05:27:58.210358 sshd[1866]: Accepted publickey for core from 139.178.89.65 port 33118 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:27:58.213618 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:27:58.223361 systemd-logind[1473]: New session 7 of user core. Jan 30 05:27:58.230148 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 05:27:58.745571 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 05:27:58.746736 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:27:59.298337 (dockerd)[1885]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 05:27:59.298346 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 05:27:59.728419 dockerd[1885]: time="2025-01-30T05:27:59.727981555Z" level=info msg="Starting up" Jan 30 05:27:59.925688 dockerd[1885]: time="2025-01-30T05:27:59.925590218Z" level=info msg="Loading containers: start." Jan 30 05:28:00.117968 kernel: Initializing XFRM netlink socket Jan 30 05:28:00.181235 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 05:28:00.192010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:00.262553 systemd-networkd[1397]: docker0: Link UP Jan 30 05:28:00.303628 dockerd[1885]: time="2025-01-30T05:28:00.303556411Z" level=info msg="Loading containers: done." Jan 30 05:28:00.340588 dockerd[1885]: time="2025-01-30T05:28:00.340507178Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 05:28:00.341231 dockerd[1885]: time="2025-01-30T05:28:00.341146889Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 05:28:00.341620 dockerd[1885]: time="2025-01-30T05:28:00.341568548Z" level=info msg="Daemon has completed initialization" Jan 30 05:28:00.396535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:00.405200 dockerd[1885]: time="2025-01-30T05:28:00.404991404Z" level=info msg="API listen on /run/docker.sock" Jan 30 05:28:00.406044 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 05:28:00.406445 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:00.468666 kubelet[2016]: E0130 05:28:00.468591 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:00.471404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:00.471581 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
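The overlay2 warning during docker startup above only means the daemon falls back from the native diff driver because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; the engine still runs, with a possible slowdown when building images. A sketch for confirming which storage driver the daemon settled on (docker CLI assumed available):

    # Storage driver reported by the running daemon
    docker info --format '{{.Driver}}'

    # Full storage section, including any backing-filesystem warnings
    docker info 2>/dev/null | grep -A5 'Storage Driver'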
Jan 30 05:28:01.443584 containerd[1502]: time="2025-01-30T05:28:01.443498790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 05:28:02.187578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233829066.mount: Deactivated successfully. Jan 30 05:28:03.313264 containerd[1502]: time="2025-01-30T05:28:03.313200761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:03.314372 containerd[1502]: time="2025-01-30T05:28:03.314335989Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674916" Jan 30 05:28:03.315607 containerd[1502]: time="2025-01-30T05:28:03.315568479Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:03.318525 containerd[1502]: time="2025-01-30T05:28:03.318326465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:03.321715 containerd[1502]: time="2025-01-30T05:28:03.320787738Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.877212494s" Jan 30 05:28:03.321715 containerd[1502]: time="2025-01-30T05:28:03.320824333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 05:28:03.322060 containerd[1502]: time="2025-01-30T05:28:03.322030776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 05:28:04.985084 containerd[1502]: time="2025-01-30T05:28:04.985013528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.986273 containerd[1502]: time="2025-01-30T05:28:04.986216020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770731" Jan 30 05:28:04.987322 containerd[1502]: time="2025-01-30T05:28:04.987253189Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.989805 containerd[1502]: time="2025-01-30T05:28:04.989763031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:04.991068 containerd[1502]: time="2025-01-30T05:28:04.990926463Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.668867267s" Jan 30 
05:28:04.991068 containerd[1502]: time="2025-01-30T05:28:04.990953632Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 05:28:04.991601 containerd[1502]: time="2025-01-30T05:28:04.991584190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 05:28:06.084081 update_engine[1474]: I20250130 05:28:06.083997 1474 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 05:28:06.084747 update_engine[1474]: I20250130 05:28:06.084151 1474 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 05:28:06.084747 update_engine[1474]: I20250130 05:28:06.084384 1474 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 05:28:06.085487 update_engine[1474]: I20250130 05:28:06.085325 1474 omaha_request_params.cc:62] Current group set to lts Jan 30 05:28:06.085487 update_engine[1474]: I20250130 05:28:06.085433 1474 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 05:28:06.085487 update_engine[1474]: I20250130 05:28:06.085441 1474 update_attempter.cc:643] Scheduling an action processor start. Jan 30 05:28:06.085487 update_engine[1474]: I20250130 05:28:06.085458 1474 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 05:28:06.085487 update_engine[1474]: I20250130 05:28:06.085484 1474 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 05:28:06.085616 update_engine[1474]: I20250130 05:28:06.085539 1474 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 05:28:06.085616 update_engine[1474]: I20250130 05:28:06.085547 1474 omaha_request_action.cc:272] Request: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: Jan 30 05:28:06.085616 update_engine[1474]: I20250130 05:28:06.085556 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:28:06.086602 update_engine[1474]: I20250130 05:28:06.086575 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:28:06.086862 update_engine[1474]: I20250130 05:28:06.086833 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
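The Omaha request above is posted to the literal host name "disabled", so the fetch that follows can never resolve it; this is how update checks look when the update server has been switched off, and the periodic retries are harmless. A sketch for inspecting the updater from a shell (client name, flag, and config paths as typically shipped on Flatcar; they may differ by release):

    # Current update_engine state
    update_engine_client -status

    # Where the update server is typically configured
    cat /etc/flatcar/update.conf 2>/dev/null || cat /usr/share/flatcar/update.conf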
Jan 30 05:28:06.087521 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 05:28:06.088200 update_engine[1474]: E20250130 05:28:06.088099 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:28:06.088200 update_engine[1474]: I20250130 05:28:06.088151 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 05:28:06.163235 containerd[1502]: time="2025-01-30T05:28:06.163138975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.164630 containerd[1502]: time="2025-01-30T05:28:06.164545343Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169779" Jan 30 05:28:06.166215 containerd[1502]: time="2025-01-30T05:28:06.166170662Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.169987 containerd[1502]: time="2025-01-30T05:28:06.169944261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:06.171198 containerd[1502]: time="2025-01-30T05:28:06.171073986Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.179303005s" Jan 30 05:28:06.171198 containerd[1502]: time="2025-01-30T05:28:06.171100914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 05:28:06.172209 containerd[1502]: time="2025-01-30T05:28:06.172173096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 05:28:07.357489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82001639.mount: Deactivated successfully. 
Jan 30 05:28:07.685624 containerd[1502]: time="2025-01-30T05:28:07.685499838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:07.686593 containerd[1502]: time="2025-01-30T05:28:07.686561777Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909492" Jan 30 05:28:07.687867 containerd[1502]: time="2025-01-30T05:28:07.687829544Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:07.690138 containerd[1502]: time="2025-01-30T05:28:07.690095865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:07.690732 containerd[1502]: time="2025-01-30T05:28:07.690557040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.518345917s" Jan 30 05:28:07.690732 containerd[1502]: time="2025-01-30T05:28:07.690584579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 05:28:07.691156 containerd[1502]: time="2025-01-30T05:28:07.691021089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 05:28:08.340265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116288658.mount: Deactivated successfully. 
Jan 30 05:28:09.388134 containerd[1502]: time="2025-01-30T05:28:09.388064024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.389379 containerd[1502]: time="2025-01-30T05:28:09.389331843Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565333" Jan 30 05:28:09.390685 containerd[1502]: time="2025-01-30T05:28:09.390642327Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.393918 containerd[1502]: time="2025-01-30T05:28:09.393807534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:09.395088 containerd[1502]: time="2025-01-30T05:28:09.394954655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.703909382s" Jan 30 05:28:09.395088 containerd[1502]: time="2025-01-30T05:28:09.394985842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 05:28:09.395703 containerd[1502]: time="2025-01-30T05:28:09.395685809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 05:28:09.984244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1120810964.mount: Deactivated successfully. 
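The containerd entries above record the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, and next the pause image) being pulled through the CRI. A sketch for listing what the runtime has stored, assuming crictl is installed and pointed at containerd's default socket:

    # Point crictl at the containerd CRI socket
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

    # Images known to the runtime, with repo digests
    crictl images --digests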
Jan 30 05:28:09.999201 containerd[1502]: time="2025-01-30T05:28:09.999079644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:10.000773 containerd[1502]: time="2025-01-30T05:28:10.000681702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321158" Jan 30 05:28:10.002553 containerd[1502]: time="2025-01-30T05:28:10.002443734Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:10.007114 containerd[1502]: time="2025-01-30T05:28:10.006994190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:10.008861 containerd[1502]: time="2025-01-30T05:28:10.008633923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 612.864555ms" Jan 30 05:28:10.008861 containerd[1502]: time="2025-01-30T05:28:10.008691858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 05:28:10.010032 containerd[1502]: time="2025-01-30T05:28:10.009652147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 05:28:10.534743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 05:28:10.544154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:10.684605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3029381290.mount: Deactivated successfully. Jan 30 05:28:10.866158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:10.877053 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:28:10.965800 kubelet[2186]: E0130 05:28:10.965722 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:28:10.971478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:28:10.971952 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
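kubelet.service keeps being rescheduled because the unit restarts on failure; the "restart counter" in the log is systemd's per-unit restart accounting. A sketch for reading those counters and the settings that drive the loop (property names as in current systemd):

    # How many times systemd has restarted the unit, and the last result
    systemctl show kubelet.service -p NRestarts -p Result

    # The Restart= / RestartSec= directives responsible for the loop
    systemctl cat kubelet.service | grep -E 'Restart|StartLimit'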
Jan 30 05:28:12.314140 containerd[1502]: time="2025-01-30T05:28:12.314051749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.315634 containerd[1502]: time="2025-01-30T05:28:12.315582177Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551382" Jan 30 05:28:12.319664 containerd[1502]: time="2025-01-30T05:28:12.319604443Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.324270 containerd[1502]: time="2025-01-30T05:28:12.324203042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:12.327534 containerd[1502]: time="2025-01-30T05:28:12.327482805Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.317769058s" Jan 30 05:28:12.327591 containerd[1502]: time="2025-01-30T05:28:12.327533166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 05:28:14.904666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:14.917391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:14.958628 systemd[1]: Reloading requested from client PID 2263 ('systemctl') (unit session-7.scope)... Jan 30 05:28:14.958652 systemd[1]: Reloading... Jan 30 05:28:15.121200 zram_generator::config[2315]: No configuration found. Jan 30 05:28:15.227554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:28:15.314539 systemd[1]: Reloading finished in 355 ms. Jan 30 05:28:15.366008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 05:28:15.366115 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 05:28:15.366399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:15.373331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:15.548471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:15.559200 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:28:15.606842 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:15.606842 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
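After the reload above the kubelet finally starts with a real configuration: the deprecation warnings only say that flags such as --container-runtime-endpoint should live in the file named by --config rather than on the command line. On kubeadm-provisioned nodes those flags and the KUBELET_KUBEADM_ARGS/KUBELET_EXTRA_ARGS variables referenced earlier usually come from a systemd drop-in shaped like the sketch below (path and contents are the upstream defaults, shown for illustration rather than read from this host):

    # Drop-in path used by kubeadm packages (location varies by distribution)
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Expected shape of that drop-in:
    #   [Service]
    #   Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    #   Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    #   EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    #   EnvironmentFile=-/etc/default/kubelet
    #   ExecStart=
    #   ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS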
Jan 30 05:28:15.606842 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:15.607337 kubelet[2355]: I0130 05:28:15.606915 2355 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:28:16.080725 update_engine[1474]: I20250130 05:28:16.079954 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:28:16.080725 update_engine[1474]: I20250130 05:28:16.080361 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:28:16.080725 update_engine[1474]: I20250130 05:28:16.080659 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:28:16.082371 update_engine[1474]: E20250130 05:28:16.082245 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:28:16.082371 update_engine[1474]: I20250130 05:28:16.082331 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 05:28:16.251535 kubelet[2355]: I0130 05:28:16.251436 2355 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 05:28:16.251535 kubelet[2355]: I0130 05:28:16.251477 2355 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:28:16.251844 kubelet[2355]: I0130 05:28:16.251713 2355 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 05:28:16.307668 kubelet[2355]: E0130 05:28:16.307401 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://128.140.113.241:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:16.308016 kubelet[2355]: I0130 05:28:16.307848 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:28:16.340324 kubelet[2355]: E0130 05:28:16.340138 2355 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 05:28:16.340324 kubelet[2355]: I0130 05:28:16.340190 2355 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 05:28:16.350216 kubelet[2355]: I0130 05:28:16.350128 2355 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 05:28:16.357211 kubelet[2355]: I0130 05:28:16.357093 2355 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:28:16.357532 kubelet[2355]: I0130 05:28:16.357194 2355 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-d-6ba27b8de2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 05:28:16.357532 kubelet[2355]: I0130 05:28:16.357529 2355 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:28:16.357751 kubelet[2355]: I0130 05:28:16.357549 2355 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 05:28:16.359667 kubelet[2355]: I0130 05:28:16.359623 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:16.371119 kubelet[2355]: I0130 05:28:16.371071 2355 kubelet.go:446] "Attempting to sync node with API server" Jan 30 05:28:16.371182 kubelet[2355]: I0130 05:28:16.371122 2355 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:28:16.371182 kubelet[2355]: I0130 05:28:16.371156 2355 kubelet.go:352] "Adding apiserver pod source" Jan 30 05:28:16.371182 kubelet[2355]: I0130 05:28:16.371174 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:28:16.386109 kubelet[2355]: I0130 05:28:16.385710 2355 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:28:16.393945 kubelet[2355]: W0130 05:28:16.392955 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://128.140.113.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-6ba27b8de2&limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:16.393945 kubelet[2355]: W0130 05:28:16.392961 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://128.140.113.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:16.393945 kubelet[2355]: E0130 05:28:16.393030 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://128.140.113.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-6ba27b8de2&limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:16.393945 kubelet[2355]: E0130 05:28:16.393074 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://128.140.113.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:16.393945 kubelet[2355]: I0130 05:28:16.393524 2355 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:28:16.393945 kubelet[2355]: W0130 05:28:16.393595 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 05:28:16.396325 kubelet[2355]: I0130 05:28:16.396282 2355 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 05:28:16.396608 kubelet[2355]: I0130 05:28:16.396564 2355 server.go:1287] "Started kubelet" Jan 30 05:28:16.397625 kubelet[2355]: I0130 05:28:16.397527 2355 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:28:16.403960 kubelet[2355]: I0130 05:28:16.403291 2355 server.go:490] "Adding debug handlers to kubelet server" Jan 30 05:28:16.407457 kubelet[2355]: I0130 05:28:16.407393 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:28:16.407716 kubelet[2355]: I0130 05:28:16.407688 2355 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:28:16.409737 kubelet[2355]: I0130 05:28:16.409709 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:28:16.414770 kubelet[2355]: E0130 05:28:16.410809 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://128.140.113.241:6443/api/v1/namespaces/default/events\": dial tcp 128.140.113.241:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-d-6ba27b8de2.181f613d2301b949 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-d-6ba27b8de2,UID:ci-4081-3-0-d-6ba27b8de2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-6ba27b8de2,},FirstTimestamp:2025-01-30 05:28:16.396515657 +0000 UTC m=+0.833662154,LastTimestamp:2025-01-30 05:28:16.396515657 +0000 UTC m=+0.833662154,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-6ba27b8de2,}" Jan 30 05:28:16.415081 kubelet[2355]: I0130 05:28:16.414818 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 05:28:16.420120 kubelet[2355]: E0130 05:28:16.419699 2355 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:16.420120 kubelet[2355]: I0130 05:28:16.419736 2355 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 05:28:16.422759 kubelet[2355]: I0130 05:28:16.421950 2355 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:28:16.422759 kubelet[2355]: I0130 05:28:16.422059 2355 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:28:16.422759 kubelet[2355]: W0130 05:28:16.422509 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://128.140.113.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:16.422759 kubelet[2355]: E0130 05:28:16.422555 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://128.140.113.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:16.423718 kubelet[2355]: I0130 05:28:16.423674 2355 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:28:16.423807 kubelet[2355]: I0130 05:28:16.423781 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:28:16.426210 kubelet[2355]: E0130 05:28:16.426166 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.113.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-6ba27b8de2?timeout=10s\": dial tcp 128.140.113.241:6443: connect: connection refused" interval="200ms" Jan 30 05:28:16.426279 kubelet[2355]: I0130 05:28:16.426262 2355 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:28:16.436113 kubelet[2355]: E0130 05:28:16.435945 2355 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:28:16.440500 kubelet[2355]: I0130 05:28:16.439571 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:28:16.441823 kubelet[2355]: I0130 05:28:16.441806 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 05:28:16.442240 kubelet[2355]: I0130 05:28:16.441932 2355 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 05:28:16.442240 kubelet[2355]: I0130 05:28:16.441960 2355 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 05:28:16.442240 kubelet[2355]: I0130 05:28:16.441968 2355 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 05:28:16.442240 kubelet[2355]: E0130 05:28:16.442017 2355 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:28:16.450145 kubelet[2355]: W0130 05:28:16.450097 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://128.140.113.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:16.450232 kubelet[2355]: E0130 05:28:16.450145 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://128.140.113.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:16.458787 kubelet[2355]: I0130 05:28:16.458543 2355 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 05:28:16.458787 kubelet[2355]: I0130 05:28:16.458559 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 05:28:16.458787 kubelet[2355]: I0130 05:28:16.458584 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:16.461678 kubelet[2355]: I0130 05:28:16.461466 2355 policy_none.go:49] "None policy: Start" Jan 30 05:28:16.461678 kubelet[2355]: I0130 05:28:16.461481 2355 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 05:28:16.461678 kubelet[2355]: I0130 05:28:16.461491 2355 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:28:16.468856 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 05:28:16.495607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 05:28:16.501547 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 05:28:16.513124 kubelet[2355]: I0130 05:28:16.513091 2355 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:28:16.513599 kubelet[2355]: I0130 05:28:16.513576 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 05:28:16.513744 kubelet[2355]: I0130 05:28:16.513698 2355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:28:16.519910 kubelet[2355]: I0130 05:28:16.519823 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:28:16.524093 kubelet[2355]: E0130 05:28:16.524058 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 05:28:16.524253 kubelet[2355]: E0130 05:28:16.524111 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:16.565451 systemd[1]: Created slice kubepods-burstable-pod34312bfdc02fc2fca0b95dca7c16cd92.slice - libcontainer container kubepods-burstable-pod34312bfdc02fc2fca0b95dca7c16cd92.slice. 
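All of the "connection refused" errors against 128.140.113.241:6443 above are expected at this point: the kubelet itself is about to start the API server as a static pod, so nothing is listening on that port yet. A sketch for checking the static pod manifests and probing the endpoint once the containers come up (address taken from the log):

    # Static pod manifests the kubelet watches, per the "Adding static pod path" line above
    ls -l /etc/kubernetes/manifests/

    # Probe the API server health endpoint the kubelet keeps retrying
    curl -k https://128.140.113.241:6443/healthz; echo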
Jan 30 05:28:16.583829 kubelet[2355]: E0130 05:28:16.583517 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.587858 systemd[1]: Created slice kubepods-burstable-pod6d39855df0ce33a9e231503b0dd1c898.slice - libcontainer container kubepods-burstable-pod6d39855df0ce33a9e231503b0dd1c898.slice. Jan 30 05:28:16.593455 kubelet[2355]: E0130 05:28:16.593356 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.595451 systemd[1]: Created slice kubepods-burstable-pod9613cd561afc65e3429f9c1f2680ce92.slice - libcontainer container kubepods-burstable-pod9613cd561afc65e3429f9c1f2680ce92.slice. Jan 30 05:28:16.598771 kubelet[2355]: E0130 05:28:16.598724 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.617151 kubelet[2355]: I0130 05:28:16.617116 2355 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.617609 kubelet[2355]: E0130 05:28:16.617438 2355 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.113.241:6443/api/v1/nodes\": dial tcp 128.140.113.241:6443: connect: connection refused" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624114 kubelet[2355]: I0130 05:28:16.624070 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624114 kubelet[2355]: I0130 05:28:16.624114 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624349 kubelet[2355]: I0130 05:28:16.624139 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624349 kubelet[2355]: I0130 05:28:16.624166 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624349 kubelet[2355]: I0130 05:28:16.624193 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624349 kubelet[2355]: I0130 05:28:16.624216 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624349 kubelet[2355]: I0130 05:28:16.624238 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624463 kubelet[2355]: I0130 05:28:16.624262 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.624463 kubelet[2355]: I0130 05:28:16.624285 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9613cd561afc65e3429f9c1f2680ce92-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-6ba27b8de2\" (UID: \"9613cd561afc65e3429f9c1f2680ce92\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.626572 kubelet[2355]: E0130 05:28:16.626519 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.113.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-6ba27b8de2?timeout=10s\": dial tcp 128.140.113.241:6443: connect: connection refused" interval="400ms" Jan 30 05:28:16.821108 kubelet[2355]: I0130 05:28:16.821013 2355 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.821677 kubelet[2355]: E0130 05:28:16.821626 2355 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.113.241:6443/api/v1/nodes\": dial tcp 128.140.113.241:6443: connect: connection refused" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:16.886021 containerd[1502]: time="2025-01-30T05:28:16.885369702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-6ba27b8de2,Uid:34312bfdc02fc2fca0b95dca7c16cd92,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:16.907127 containerd[1502]: time="2025-01-30T05:28:16.907014846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-6ba27b8de2,Uid:6d39855df0ce33a9e231503b0dd1c898,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:16.907665 containerd[1502]: time="2025-01-30T05:28:16.907055611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-6ba27b8de2,Uid:9613cd561afc65e3429f9c1f2680ce92,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:17.027424 kubelet[2355]: E0130 05:28:17.027343 2355 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://128.140.113.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-6ba27b8de2?timeout=10s\": dial tcp 128.140.113.241:6443: connect: connection refused" interval="800ms" Jan 30 05:28:17.225326 kubelet[2355]: I0130 05:28:17.225162 2355 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:17.225949 kubelet[2355]: E0130 05:28:17.225800 2355 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.113.241:6443/api/v1/nodes\": dial tcp 128.140.113.241:6443: connect: connection refused" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:17.414620 kubelet[2355]: W0130 05:28:17.414524 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://128.140.113.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:17.414620 kubelet[2355]: E0130 05:28:17.414611 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://128.140.113.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:17.455755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290235528.mount: Deactivated successfully. Jan 30 05:28:17.462350 kubelet[2355]: W0130 05:28:17.462280 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://128.140.113.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:17.462584 kubelet[2355]: E0130 05:28:17.462360 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://128.140.113.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:17.471362 containerd[1502]: time="2025-01-30T05:28:17.471277866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:17.473319 containerd[1502]: time="2025-01-30T05:28:17.473230544Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:17.475143 containerd[1502]: time="2025-01-30T05:28:17.475053624Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:28:17.476747 containerd[1502]: time="2025-01-30T05:28:17.476567200Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:17.479117 containerd[1502]: time="2025-01-30T05:28:17.479042102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:28:17.481171 containerd[1502]: time="2025-01-30T05:28:17.481100032Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:17.482194 containerd[1502]: time="2025-01-30T05:28:17.482080182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Jan 30 05:28:17.488768 containerd[1502]: time="2025-01-30T05:28:17.488656878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:28:17.490450 containerd[1502]: time="2025-01-30T05:28:17.490257173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.770526ms" Jan 30 05:28:17.495455 containerd[1502]: time="2025-01-30T05:28:17.495299987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.173306ms" Jan 30 05:28:17.498404 containerd[1502]: time="2025-01-30T05:28:17.498206707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.711045ms" Jan 30 05:28:17.748986 containerd[1502]: time="2025-01-30T05:28:17.747643755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:17.749606 containerd[1502]: time="2025-01-30T05:28:17.749371221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:17.749606 containerd[1502]: time="2025-01-30T05:28:17.749389965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.751067 containerd[1502]: time="2025-01-30T05:28:17.750170702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.753931 containerd[1502]: time="2025-01-30T05:28:17.753767905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:17.753931 containerd[1502]: time="2025-01-30T05:28:17.753864169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:17.754286 containerd[1502]: time="2025-01-30T05:28:17.753919772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.754286 containerd[1502]: time="2025-01-30T05:28:17.754104950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.762114 containerd[1502]: time="2025-01-30T05:28:17.761304624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:17.762114 containerd[1502]: time="2025-01-30T05:28:17.761378479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:17.762114 containerd[1502]: time="2025-01-30T05:28:17.761392865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.762114 containerd[1502]: time="2025-01-30T05:28:17.761487157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:17.778109 systemd[1]: Started cri-containerd-1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c.scope - libcontainer container 1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c. Jan 30 05:28:17.784606 kubelet[2355]: W0130 05:28:17.784366 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://128.140.113.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:17.790710 kubelet[2355]: E0130 05:28:17.790212 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://128.140.113.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:17.801059 systemd[1]: Started cri-containerd-54a90574fd5dab35f5c09748d776617c4bbda76d9a77747ce01de1bd96dbf34e.scope - libcontainer container 54a90574fd5dab35f5c09748d776617c4bbda76d9a77747ce01de1bd96dbf34e. Jan 30 05:28:17.805193 systemd[1]: Started cri-containerd-1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945.scope - libcontainer container 1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945. 
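The cri-containerd-*.scope units above are the per-sandbox cgroup scopes containerd creates once the kubelet asks it to run the three control-plane pods. A sketch for mapping those long IDs back to pods and containers (crictl pointed at containerd as in the earlier sketch):

    # Pod sandboxes the CRI runtime is managing, with their IDs
    crictl pods

    # All containers, including the kube-apiserver/controller-manager/scheduler ones started below
    crictl ps -a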
Jan 30 05:28:17.833966 kubelet[2355]: E0130 05:28:17.833836 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://128.140.113.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-d-6ba27b8de2?timeout=10s\": dial tcp 128.140.113.241:6443: connect: connection refused" interval="1.6s" Jan 30 05:28:17.845143 containerd[1502]: time="2025-01-30T05:28:17.844953496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-d-6ba27b8de2,Uid:6d39855df0ce33a9e231503b0dd1c898,Namespace:kube-system,Attempt:0,} returns sandbox id \"1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c\"" Jan 30 05:28:17.853765 containerd[1502]: time="2025-01-30T05:28:17.853648072Z" level=info msg="CreateContainer within sandbox \"1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:28:17.875563 containerd[1502]: time="2025-01-30T05:28:17.875085229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-d-6ba27b8de2,Uid:34312bfdc02fc2fca0b95dca7c16cd92,Namespace:kube-system,Attempt:0,} returns sandbox id \"54a90574fd5dab35f5c09748d776617c4bbda76d9a77747ce01de1bd96dbf34e\"" Jan 30 05:28:17.878731 containerd[1502]: time="2025-01-30T05:28:17.878612273Z" level=info msg="CreateContainer within sandbox \"54a90574fd5dab35f5c09748d776617c4bbda76d9a77747ce01de1bd96dbf34e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:28:17.892783 containerd[1502]: time="2025-01-30T05:28:17.892748381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-d-6ba27b8de2,Uid:9613cd561afc65e3429f9c1f2680ce92,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945\"" Jan 30 05:28:17.898625 containerd[1502]: time="2025-01-30T05:28:17.898577611Z" level=info msg="CreateContainer within sandbox \"1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:28:17.901216 containerd[1502]: time="2025-01-30T05:28:17.901028008Z" level=info msg="CreateContainer within sandbox \"1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c\"" Jan 30 05:28:17.904212 containerd[1502]: time="2025-01-30T05:28:17.903796927Z" level=info msg="CreateContainer within sandbox \"54a90574fd5dab35f5c09748d776617c4bbda76d9a77747ce01de1bd96dbf34e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c13eb0a651d270799a14f8fbb9d832b00e10fcc6266405ad3bcf59f008321da4\"" Jan 30 05:28:17.905193 containerd[1502]: time="2025-01-30T05:28:17.905045018Z" level=info msg="StartContainer for \"c13eb0a651d270799a14f8fbb9d832b00e10fcc6266405ad3bcf59f008321da4\"" Jan 30 05:28:17.905309 containerd[1502]: time="2025-01-30T05:28:17.905291228Z" level=info msg="StartContainer for \"f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c\"" Jan 30 05:28:17.920193 containerd[1502]: time="2025-01-30T05:28:17.920152309Z" level=info msg="CreateContainer within sandbox \"1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8\"" Jan 30 05:28:17.921369 containerd[1502]: time="2025-01-30T05:28:17.921333879Z" level=info msg="StartContainer for \"69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8\"" Jan 30 05:28:17.941634 kubelet[2355]: W0130 05:28:17.941154 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://128.140.113.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-6ba27b8de2&limit=500&resourceVersion=0": dial tcp 128.140.113.241:6443: connect: connection refused Jan 30 05:28:17.941634 kubelet[2355]: E0130 05:28:17.941215 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://128.140.113.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-d-6ba27b8de2&limit=500&resourceVersion=0\": dial tcp 128.140.113.241:6443: connect: connection refused" logger="UnhandledError" Jan 30 05:28:17.944358 systemd[1]: Started cri-containerd-c13eb0a651d270799a14f8fbb9d832b00e10fcc6266405ad3bcf59f008321da4.scope - libcontainer container c13eb0a651d270799a14f8fbb9d832b00e10fcc6266405ad3bcf59f008321da4. Jan 30 05:28:17.962131 systemd[1]: Started cri-containerd-f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c.scope - libcontainer container f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c. Jan 30 05:28:17.976505 systemd[1]: Started cri-containerd-69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8.scope - libcontainer container 69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8. Jan 30 05:28:18.030838 kubelet[2355]: I0130 05:28:18.030602 2355 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:18.031910 containerd[1502]: time="2025-01-30T05:28:18.030590394Z" level=info msg="StartContainer for \"c13eb0a651d270799a14f8fbb9d832b00e10fcc6266405ad3bcf59f008321da4\" returns successfully" Jan 30 05:28:18.031981 kubelet[2355]: E0130 05:28:18.031067 2355 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://128.140.113.241:6443/api/v1/nodes\": dial tcp 128.140.113.241:6443: connect: connection refused" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:18.044010 containerd[1502]: time="2025-01-30T05:28:18.043945509Z" level=info msg="StartContainer for \"f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c\" returns successfully" Jan 30 05:28:18.050584 containerd[1502]: time="2025-01-30T05:28:18.050546431Z" level=info msg="StartContainer for \"69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8\" returns successfully" Jan 30 05:28:18.461301 kubelet[2355]: E0130 05:28:18.461263 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:18.462215 kubelet[2355]: E0130 05:28:18.462180 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:18.463529 kubelet[2355]: E0130 05:28:18.463506 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.472548 kubelet[2355]: E0130 05:28:19.472459 2355 kubelet.go:3196] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.473205 kubelet[2355]: E0130 05:28:19.473167 2355 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.632982 kubelet[2355]: I0130 05:28:19.632949 2355 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.689735 kubelet[2355]: E0130 05:28:19.689669 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-d-6ba27b8de2\" not found" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.771555 kubelet[2355]: I0130 05:28:19.771045 2355 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:19.771555 kubelet[2355]: E0130 05:28:19.771077 2355 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081-3-0-d-6ba27b8de2\": node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:19.779173 kubelet[2355]: E0130 05:28:19.779079 2355 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:19.879244 kubelet[2355]: E0130 05:28:19.879183 2355 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:20.027157 kubelet[2355]: I0130 05:28:20.026387 2355 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.035177 kubelet[2355]: E0130 05:28:20.034815 2355 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-d-6ba27b8de2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.035177 kubelet[2355]: I0130 05:28:20.034859 2355 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.037785 kubelet[2355]: E0130 05:28:20.037708 2355 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.037785 kubelet[2355]: I0130 05:28:20.037773 2355 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.040539 kubelet[2355]: E0130 05:28:20.040498 2355 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:20.379255 kubelet[2355]: I0130 05:28:20.379154 2355 apiserver.go:52] "Watching apiserver" Jan 30 05:28:20.423053 kubelet[2355]: I0130 05:28:20.422858 2355 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:28:22.059724 systemd[1]: Reloading requested from client PID 2630 ('systemctl') (unit session-7.scope)... Jan 30 05:28:22.059769 systemd[1]: Reloading... Jan 30 05:28:22.224935 zram_generator::config[2673]: No configuration found. 
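The repeated "dial tcp 128.140.113.241:6443: connect: connection refused" failures above are the kubelet's informers, lease controller, and node-registration calls all hitting the apiserver endpoint before the kube-apiserver static pod started above begins serving; the node then registers successfully at 05:28:19. A minimal Go sketch of the same kind of readiness probe, using the address from the log (the retry loop is illustrative only, not the kubelet's own backoff):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust for your own cluster.
	const addr = "128.140.113.241:6443"
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// While the kube-apiserver static pod is still starting, this fails with
			// "connect: connection refused", exactly as the reflector and lease errors above.
			fmt.Println("not ready yet:", err)
			time.Sleep(1600 * time.Millisecond) // the lease controller above retries on a 1.6s interval
			continue
		}
		conn.Close()
		fmt.Println("apiserver is accepting connections")
		return
	}
}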
Jan 30 05:28:22.358487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:28:22.368160 kubelet[2355]: I0130 05:28:22.366157 2355 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:22.478757 systemd[1]: Reloading finished in 417 ms. Jan 30 05:28:22.533125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:22.553346 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:28:22.553722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:22.553795 systemd[1]: kubelet.service: Consumed 1.351s CPU time, 125.3M memory peak, 0B memory swap peak. Jan 30 05:28:22.559467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:28:22.831934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:28:22.850560 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:28:22.919910 kubelet[2720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:22.919910 kubelet[2720]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 05:28:22.919910 kubelet[2720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:28:22.920289 kubelet[2720]: I0130 05:28:22.919978 2720 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:28:22.930902 kubelet[2720]: I0130 05:28:22.930342 2720 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 05:28:22.930902 kubelet[2720]: I0130 05:28:22.930368 2720 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:28:22.931513 kubelet[2720]: I0130 05:28:22.931491 2720 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 05:28:22.934446 kubelet[2720]: I0130 05:28:22.934422 2720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:28:22.940003 kubelet[2720]: I0130 05:28:22.939519 2720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:28:22.947781 kubelet[2720]: E0130 05:28:22.947719 2720 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 05:28:22.947781 kubelet[2720]: I0130 05:28:22.947770 2720 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 05:28:22.952642 kubelet[2720]: I0130 05:28:22.952600 2720 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 05:28:22.953559 kubelet[2720]: I0130 05:28:22.953498 2720 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:28:22.953728 kubelet[2720]: I0130 05:28:22.953536 2720 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-d-6ba27b8de2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 05:28:22.953728 kubelet[2720]: I0130 05:28:22.953722 2720 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:28:22.953728 kubelet[2720]: I0130 05:28:22.953731 2720 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 05:28:22.953951 kubelet[2720]: I0130 05:28:22.953769 2720 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:22.953984 kubelet[2720]: I0130 05:28:22.953959 2720 kubelet.go:446] "Attempting to sync node with API server" Jan 30 05:28:22.953984 kubelet[2720]: I0130 05:28:22.953977 2720 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:28:22.954053 kubelet[2720]: I0130 05:28:22.953993 2720 kubelet.go:352] "Adding apiserver pod source" Jan 30 05:28:22.956600 kubelet[2720]: I0130 05:28:22.954713 2720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:28:22.959928 kubelet[2720]: I0130 05:28:22.959857 2720 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:28:22.960315 kubelet[2720]: I0130 05:28:22.960280 2720 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:28:22.961933 kubelet[2720]: I0130 05:28:22.960823 2720 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 05:28:22.961933 kubelet[2720]: I0130 05:28:22.960863 2720 server.go:1287] "Started kubelet" Jan 30 05:28:22.969226 kubelet[2720]: I0130 05:28:22.969201 2720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:28:22.979197 kubelet[2720]: I0130 05:28:22.978080 2720 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:28:22.979902 kubelet[2720]: I0130 05:28:22.979851 2720 server.go:490] "Adding debug handlers to kubelet server" Jan 30 05:28:22.983563 kubelet[2720]: I0130 05:28:22.983229 2720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:28:22.984274 kubelet[2720]: I0130 05:28:22.984225 2720 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:28:22.984531 kubelet[2720]: I0130 05:28:22.984444 2720 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 05:28:22.986665 kubelet[2720]: I0130 05:28:22.986624 2720 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 05:28:22.986845 kubelet[2720]: E0130 05:28:22.986807 2720 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081-3-0-d-6ba27b8de2\" not found" Jan 30 05:28:22.987966 kubelet[2720]: I0130 05:28:22.987945 2720 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:28:22.988135 kubelet[2720]: I0130 05:28:22.988061 2720 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:28:22.989302 kubelet[2720]: I0130 05:28:22.989212 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:28:22.990951 kubelet[2720]: I0130 05:28:22.990873 2720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 05:28:22.991121 kubelet[2720]: I0130 05:28:22.991042 2720 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 05:28:22.991121 kubelet[2720]: I0130 05:28:22.991077 2720 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 05:28:22.991121 kubelet[2720]: I0130 05:28:22.991085 2720 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 05:28:22.991396 kubelet[2720]: E0130 05:28:22.991327 2720 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:28:22.997837 kubelet[2720]: I0130 05:28:22.997503 2720 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:28:22.997837 kubelet[2720]: I0130 05:28:22.997584 2720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:28:23.001511 kubelet[2720]: I0130 05:28:23.001289 2720 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:28:23.061088 kubelet[2720]: I0130 05:28:23.061044 2720 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 05:28:23.061088 kubelet[2720]: I0130 05:28:23.061065 2720 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 05:28:23.061088 kubelet[2720]: I0130 05:28:23.061081 2720 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:28:23.061272 kubelet[2720]: I0130 05:28:23.061208 2720 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:28:23.061272 kubelet[2720]: I0130 05:28:23.061219 2720 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:28:23.061272 kubelet[2720]: I0130 05:28:23.061237 2720 policy_none.go:49] "None policy: Start" Jan 30 05:28:23.061272 kubelet[2720]: I0130 05:28:23.061245 2720 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 05:28:23.061272 kubelet[2720]: I0130 05:28:23.061255 2720 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:28:23.061404 kubelet[2720]: I0130 05:28:23.061338 2720 state_mem.go:75] "Updated machine memory state" Jan 30 05:28:23.065724 kubelet[2720]: I0130 05:28:23.065694 2720 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:28:23.067432 kubelet[2720]: I0130 05:28:23.065853 2720 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 05:28:23.067432 kubelet[2720]: I0130 05:28:23.065869 2720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:28:23.067432 kubelet[2720]: I0130 05:28:23.066449 2720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:28:23.069040 kubelet[2720]: E0130 05:28:23.069013 2720 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 05:28:23.092849 kubelet[2720]: I0130 05:28:23.092619 2720 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.097245 kubelet[2720]: I0130 05:28:23.096988 2720 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.098529 kubelet[2720]: I0130 05:28:23.098050 2720 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.107352 kubelet[2720]: E0130 05:28:23.106932 2720 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.174042 kubelet[2720]: I0130 05:28:23.173188 2720 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.186477 kubelet[2720]: I0130 05:28:23.186130 2720 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.186477 kubelet[2720]: I0130 05:28:23.186241 2720 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.188869 kubelet[2720]: I0130 05:28:23.188841 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9613cd561afc65e3429f9c1f2680ce92-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-d-6ba27b8de2\" (UID: \"9613cd561afc65e3429f9c1f2680ce92\") " pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.188967 kubelet[2720]: I0130 05:28:23.188953 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189063 kubelet[2720]: I0130 05:28:23.189049 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189201 kubelet[2720]: I0130 05:28:23.189145 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189201 kubelet[2720]: I0130 05:28:23.189174 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189352 kubelet[2720]: I0130 05:28:23.189282 2720 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189352 kubelet[2720]: I0130 05:28:23.189301 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d39855df0ce33a9e231503b0dd1c898-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-d-6ba27b8de2\" (UID: \"6d39855df0ce33a9e231503b0dd1c898\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189352 kubelet[2720]: I0130 05:28:23.189315 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.189352 kubelet[2720]: I0130 05:28:23.189333 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34312bfdc02fc2fca0b95dca7c16cd92-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" (UID: \"34312bfdc02fc2fca0b95dca7c16cd92\") " pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:23.956851 kubelet[2720]: I0130 05:28:23.956598 2720 apiserver.go:52] "Watching apiserver" Jan 30 05:28:23.988827 kubelet[2720]: I0130 05:28:23.988767 2720 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:28:24.031709 kubelet[2720]: I0130 05:28:24.030396 2720 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:24.032373 kubelet[2720]: I0130 05:28:24.030985 2720 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:24.048200 kubelet[2720]: E0130 05:28:24.048137 2720 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-0-d-6ba27b8de2\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:24.060673 kubelet[2720]: E0130 05:28:24.060535 2720 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-0-d-6ba27b8de2\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" Jan 30 05:28:24.095009 kubelet[2720]: I0130 05:28:24.094945 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-d-6ba27b8de2" podStartSLOduration=2.094927713 podStartE2EDuration="2.094927713s" podCreationTimestamp="2025-01-30 05:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:24.081883999 +0000 UTC m=+1.220734966" watchObservedRunningTime="2025-01-30 05:28:24.094927713 +0000 UTC m=+1.233778680" Jan 30 05:28:24.104454 kubelet[2720]: I0130 05:28:24.104123 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-d-6ba27b8de2" podStartSLOduration=1.104105874 
podStartE2EDuration="1.104105874s" podCreationTimestamp="2025-01-30 05:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:24.095168208 +0000 UTC m=+1.234019175" watchObservedRunningTime="2025-01-30 05:28:24.104105874 +0000 UTC m=+1.242956842" Jan 30 05:28:24.112677 kubelet[2720]: I0130 05:28:24.112612 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-d-6ba27b8de2" podStartSLOduration=1.112595283 podStartE2EDuration="1.112595283s" podCreationTimestamp="2025-01-30 05:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:24.103712086 +0000 UTC m=+1.242563054" watchObservedRunningTime="2025-01-30 05:28:24.112595283 +0000 UTC m=+1.251446250" Jan 30 05:28:26.079276 update_engine[1474]: I20250130 05:28:26.079141 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:28:26.080146 update_engine[1474]: I20250130 05:28:26.079603 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:28:26.080146 update_engine[1474]: I20250130 05:28:26.079948 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:28:26.081182 update_engine[1474]: E20250130 05:28:26.081102 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:28:26.081313 update_engine[1474]: I20250130 05:28:26.081223 1474 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 05:28:26.898460 kubelet[2720]: I0130 05:28:26.898394 2720 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:28:26.899659 containerd[1502]: time="2025-01-30T05:28:26.899608145Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 05:28:26.900731 kubelet[2720]: I0130 05:28:26.900150 2720 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:28:27.785957 systemd[1]: Created slice kubepods-besteffort-pod3a264b27_ce6f_4449_b7ad_02f57e12c0ef.slice - libcontainer container kubepods-besteffort-pod3a264b27_ce6f_4449_b7ad_02f57e12c0ef.slice. 
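The "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" entries above record the node receiving its per-node pod address range, which the kubelet then pushes to containerd through the CRI runtime config. A short sketch parsing that same CIDR to show the size of the range (256 addresses, of which the CNI plugin normally reserves a handful):

package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR value taken from the "Updating Pod CIDR" entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for this node's pods.
	fmt.Printf("pod range %s: %d addresses\n", ipnet, 1<<(bits-ones))
}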
Jan 30 05:28:27.820632 kubelet[2720]: I0130 05:28:27.820582 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a264b27-ce6f-4449-b7ad-02f57e12c0ef-lib-modules\") pod \"kube-proxy-nw9bc\" (UID: \"3a264b27-ce6f-4449-b7ad-02f57e12c0ef\") " pod="kube-system/kube-proxy-nw9bc" Jan 30 05:28:27.820632 kubelet[2720]: I0130 05:28:27.820653 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a264b27-ce6f-4449-b7ad-02f57e12c0ef-kube-proxy\") pod \"kube-proxy-nw9bc\" (UID: \"3a264b27-ce6f-4449-b7ad-02f57e12c0ef\") " pod="kube-system/kube-proxy-nw9bc" Jan 30 05:28:27.820632 kubelet[2720]: I0130 05:28:27.820703 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a264b27-ce6f-4449-b7ad-02f57e12c0ef-xtables-lock\") pod \"kube-proxy-nw9bc\" (UID: \"3a264b27-ce6f-4449-b7ad-02f57e12c0ef\") " pod="kube-system/kube-proxy-nw9bc" Jan 30 05:28:27.821088 kubelet[2720]: I0130 05:28:27.820727 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmb6\" (UniqueName: \"kubernetes.io/projected/3a264b27-ce6f-4449-b7ad-02f57e12c0ef-kube-api-access-gkmb6\") pod \"kube-proxy-nw9bc\" (UID: \"3a264b27-ce6f-4449-b7ad-02f57e12c0ef\") " pod="kube-system/kube-proxy-nw9bc" Jan 30 05:28:28.077157 systemd[1]: Created slice kubepods-besteffort-pod2e9938d2_8b0c_4385_849d_fdc4211a78d7.slice - libcontainer container kubepods-besteffort-pod2e9938d2_8b0c_4385_849d_fdc4211a78d7.slice. Jan 30 05:28:28.096006 containerd[1502]: time="2025-01-30T05:28:28.095866152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nw9bc,Uid:3a264b27-ce6f-4449-b7ad-02f57e12c0ef,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:28.128732 kubelet[2720]: I0130 05:28:28.125351 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e9938d2-8b0c-4385-849d-fdc4211a78d7-var-lib-calico\") pod \"tigera-operator-7d68577dc5-wtsgf\" (UID: \"2e9938d2-8b0c-4385-849d-fdc4211a78d7\") " pod="tigera-operator/tigera-operator-7d68577dc5-wtsgf" Jan 30 05:28:28.128732 kubelet[2720]: I0130 05:28:28.125426 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ch5p\" (UniqueName: \"kubernetes.io/projected/2e9938d2-8b0c-4385-849d-fdc4211a78d7-kube-api-access-5ch5p\") pod \"tigera-operator-7d68577dc5-wtsgf\" (UID: \"2e9938d2-8b0c-4385-849d-fdc4211a78d7\") " pod="tigera-operator/tigera-operator-7d68577dc5-wtsgf" Jan 30 05:28:28.133215 containerd[1502]: time="2025-01-30T05:28:28.133133325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:28.133342 containerd[1502]: time="2025-01-30T05:28:28.133239202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:28.133342 containerd[1502]: time="2025-01-30T05:28:28.133270770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:28.134046 containerd[1502]: time="2025-01-30T05:28:28.133453460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:28.161485 systemd[1]: Started cri-containerd-a901a9f3fe8f7b30ea36bd35c8454dbf28a4728a1da1ddbc55b83d0b682f478e.scope - libcontainer container a901a9f3fe8f7b30ea36bd35c8454dbf28a4728a1da1ddbc55b83d0b682f478e. Jan 30 05:28:28.208314 containerd[1502]: time="2025-01-30T05:28:28.208273555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nw9bc,Uid:3a264b27-ce6f-4449-b7ad-02f57e12c0ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a901a9f3fe8f7b30ea36bd35c8454dbf28a4728a1da1ddbc55b83d0b682f478e\"" Jan 30 05:28:28.217582 containerd[1502]: time="2025-01-30T05:28:28.217525094Z" level=info msg="CreateContainer within sandbox \"a901a9f3fe8f7b30ea36bd35c8454dbf28a4728a1da1ddbc55b83d0b682f478e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:28:28.250191 containerd[1502]: time="2025-01-30T05:28:28.250146214Z" level=info msg="CreateContainer within sandbox \"a901a9f3fe8f7b30ea36bd35c8454dbf28a4728a1da1ddbc55b83d0b682f478e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20f413631b7e74788c0e2d5e288881ad469f96e510b4428511db76c882af32f0\"" Jan 30 05:28:28.254687 containerd[1502]: time="2025-01-30T05:28:28.252825533Z" level=info msg="StartContainer for \"20f413631b7e74788c0e2d5e288881ad469f96e510b4428511db76c882af32f0\"" Jan 30 05:28:28.264084 sudo[1869]: pam_unix(sudo:session): session closed for user root Jan 30 05:28:28.289231 systemd[1]: Started cri-containerd-20f413631b7e74788c0e2d5e288881ad469f96e510b4428511db76c882af32f0.scope - libcontainer container 20f413631b7e74788c0e2d5e288881ad469f96e510b4428511db76c882af32f0. Jan 30 05:28:28.323416 containerd[1502]: time="2025-01-30T05:28:28.323326973Z" level=info msg="StartContainer for \"20f413631b7e74788c0e2d5e288881ad469f96e510b4428511db76c882af32f0\" returns successfully" Jan 30 05:28:28.383602 containerd[1502]: time="2025-01-30T05:28:28.383459949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-wtsgf,Uid:2e9938d2-8b0c-4385-849d-fdc4211a78d7,Namespace:tigera-operator,Attempt:0,}" Jan 30 05:28:28.421052 containerd[1502]: time="2025-01-30T05:28:28.420942682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:28.421311 containerd[1502]: time="2025-01-30T05:28:28.421272966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:28.421443 containerd[1502]: time="2025-01-30T05:28:28.421401766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:28.421747 containerd[1502]: time="2025-01-30T05:28:28.421705560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:28.425967 sshd[1866]: pam_unix(sshd:session): session closed for user core Jan 30 05:28:28.434276 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:28:28.436472 systemd[1]: sshd@6-128.140.113.241:22-139.178.89.65:33118.service: Deactivated successfully. 
Jan 30 05:28:28.442078 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:28:28.442596 systemd[1]: session-7.scope: Consumed 5.294s CPU time, 144.6M memory peak, 0B memory swap peak. Jan 30 05:28:28.443998 systemd-logind[1473]: Removed session 7. Jan 30 05:28:28.455036 systemd[1]: Started cri-containerd-d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457.scope - libcontainer container d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457. Jan 30 05:28:28.500289 containerd[1502]: time="2025-01-30T05:28:28.500145761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-wtsgf,Uid:2e9938d2-8b0c-4385-849d-fdc4211a78d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457\"" Jan 30 05:28:28.502201 containerd[1502]: time="2025-01-30T05:28:28.502174149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 05:28:28.964040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364541096.mount: Deactivated successfully. Jan 30 05:28:29.337143 kubelet[2720]: I0130 05:28:29.336211 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nw9bc" podStartSLOduration=2.336183195 podStartE2EDuration="2.336183195s" podCreationTimestamp="2025-01-30 05:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:28:29.085574957 +0000 UTC m=+6.224425985" watchObservedRunningTime="2025-01-30 05:28:29.336183195 +0000 UTC m=+6.475034201" Jan 30 05:28:30.323824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707415449.mount: Deactivated successfully. Jan 30 05:28:30.787287 containerd[1502]: time="2025-01-30T05:28:30.787192581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:30.789915 containerd[1502]: time="2025-01-30T05:28:30.789253660Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 05:28:30.791927 containerd[1502]: time="2025-01-30T05:28:30.790704583Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:30.795093 containerd[1502]: time="2025-01-30T05:28:30.795015773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:30.795922 containerd[1502]: time="2025-01-30T05:28:30.795763976Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.293558698s" Jan 30 05:28:30.795922 containerd[1502]: time="2025-01-30T05:28:30.795807077Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 05:28:30.800295 containerd[1502]: time="2025-01-30T05:28:30.800092779Z" level=info msg="CreateContainer within sandbox 
\"d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 05:28:30.816853 containerd[1502]: time="2025-01-30T05:28:30.816813943Z" level=info msg="CreateContainer within sandbox \"d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f\"" Jan 30 05:28:30.817544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871265154.mount: Deactivated successfully. Jan 30 05:28:30.817902 containerd[1502]: time="2025-01-30T05:28:30.817778048Z" level=info msg="StartContainer for \"84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f\"" Jan 30 05:28:30.860040 systemd[1]: Started cri-containerd-84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f.scope - libcontainer container 84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f. Jan 30 05:28:30.893076 containerd[1502]: time="2025-01-30T05:28:30.892986698Z" level=info msg="StartContainer for \"84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f\" returns successfully" Jan 30 05:28:31.099028 kubelet[2720]: I0130 05:28:31.097997 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-wtsgf" podStartSLOduration=1.801556565 podStartE2EDuration="4.097968405s" podCreationTimestamp="2025-01-30 05:28:27 +0000 UTC" firstStartedPulling="2025-01-30 05:28:28.501612225 +0000 UTC m=+5.640463193" lastFinishedPulling="2025-01-30 05:28:30.798024066 +0000 UTC m=+7.936875033" observedRunningTime="2025-01-30 05:28:31.084936935 +0000 UTC m=+8.223787952" watchObservedRunningTime="2025-01-30 05:28:31.097968405 +0000 UTC m=+8.236819402" Jan 30 05:28:34.021968 kubelet[2720]: W0130 05:28:34.021882 2720 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-0-d-6ba27b8de2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-0-d-6ba27b8de2' and this object Jan 30 05:28:34.024317 systemd[1]: Created slice kubepods-besteffort-podaebd4ecb_e91a_4347_8132_bbb686ff654d.slice - libcontainer container kubepods-besteffort-podaebd4ecb_e91a_4347_8132_bbb686ff654d.slice. Jan 30 05:28:34.026071 kubelet[2720]: E0130 05:28:34.025866 2720 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-0-d-6ba27b8de2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-0-d-6ba27b8de2' and this object" logger="UnhandledError" Jan 30 05:28:34.046128 systemd[1]: Created slice kubepods-besteffort-pod89210e16_1b6d_48cd_9fcc_c58fd591da25.slice - libcontainer container kubepods-besteffort-pod89210e16_1b6d_48cd_9fcc_c58fd591da25.slice. 
Jan 30 05:28:34.064796 kubelet[2720]: I0130 05:28:34.064759 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-flexvol-driver-host\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.064796 kubelet[2720]: I0130 05:28:34.064795 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-lib-modules\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065085 kubelet[2720]: I0130 05:28:34.064809 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-var-lib-calico\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065085 kubelet[2720]: I0130 05:28:34.064822 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-cni-bin-dir\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065085 kubelet[2720]: I0130 05:28:34.064838 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22t8\" (UniqueName: \"kubernetes.io/projected/89210e16-1b6d-48cd-9fcc-c58fd591da25-kube-api-access-g22t8\") pod \"calico-typha-6f97475579-jpnbt\" (UID: \"89210e16-1b6d-48cd-9fcc-c58fd591da25\") " pod="calico-system/calico-typha-6f97475579-jpnbt" Jan 30 05:28:34.065085 kubelet[2720]: I0130 05:28:34.064853 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aebd4ecb-e91a-4347-8132-bbb686ff654d-node-certs\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065085 kubelet[2720]: I0130 05:28:34.064868 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-cni-log-dir\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065200 kubelet[2720]: I0130 05:28:34.064905 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmg8f\" (UniqueName: \"kubernetes.io/projected/aebd4ecb-e91a-4347-8132-bbb686ff654d-kube-api-access-fmg8f\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065200 kubelet[2720]: I0130 05:28:34.064920 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89210e16-1b6d-48cd-9fcc-c58fd591da25-tigera-ca-bundle\") pod \"calico-typha-6f97475579-jpnbt\" (UID: \"89210e16-1b6d-48cd-9fcc-c58fd591da25\") " pod="calico-system/calico-typha-6f97475579-jpnbt" Jan 30 
05:28:34.065200 kubelet[2720]: I0130 05:28:34.064937 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-xtables-lock\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065200 kubelet[2720]: I0130 05:28:34.064951 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-policysync\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065200 kubelet[2720]: I0130 05:28:34.064965 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebd4ecb-e91a-4347-8132-bbb686ff654d-tigera-ca-bundle\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065314 kubelet[2720]: I0130 05:28:34.064982 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-var-run-calico\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065314 kubelet[2720]: I0130 05:28:34.064997 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aebd4ecb-e91a-4347-8132-bbb686ff654d-cni-net-dir\") pod \"calico-node-4tsk6\" (UID: \"aebd4ecb-e91a-4347-8132-bbb686ff654d\") " pod="calico-system/calico-node-4tsk6" Jan 30 05:28:34.065314 kubelet[2720]: I0130 05:28:34.065010 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/89210e16-1b6d-48cd-9fcc-c58fd591da25-typha-certs\") pod \"calico-typha-6f97475579-jpnbt\" (UID: \"89210e16-1b6d-48cd-9fcc-c58fd591da25\") " pod="calico-system/calico-typha-6f97475579-jpnbt" Jan 30 05:28:34.109311 kubelet[2720]: E0130 05:28:34.108495 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:34.166280 kubelet[2720]: I0130 05:28:34.166232 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/89d6eda1-89d1-46d7-9c9a-f40abee39703-socket-dir\") pod \"csi-node-driver-rlwbr\" (UID: \"89d6eda1-89d1-46d7-9c9a-f40abee39703\") " pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:34.166433 kubelet[2720]: I0130 05:28:34.166344 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/89d6eda1-89d1-46d7-9c9a-f40abee39703-varrun\") pod \"csi-node-driver-rlwbr\" (UID: \"89d6eda1-89d1-46d7-9c9a-f40abee39703\") " pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:34.166433 kubelet[2720]: I0130 05:28:34.166404 2720 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89d6eda1-89d1-46d7-9c9a-f40abee39703-kubelet-dir\") pod \"csi-node-driver-rlwbr\" (UID: \"89d6eda1-89d1-46d7-9c9a-f40abee39703\") " pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:34.166493 kubelet[2720]: I0130 05:28:34.166435 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/89d6eda1-89d1-46d7-9c9a-f40abee39703-registration-dir\") pod \"csi-node-driver-rlwbr\" (UID: \"89d6eda1-89d1-46d7-9c9a-f40abee39703\") " pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:34.166493 kubelet[2720]: I0130 05:28:34.166454 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgg5\" (UniqueName: \"kubernetes.io/projected/89d6eda1-89d1-46d7-9c9a-f40abee39703-kube-api-access-vjgg5\") pod \"csi-node-driver-rlwbr\" (UID: \"89d6eda1-89d1-46d7-9c9a-f40abee39703\") " pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:34.180322 kubelet[2720]: E0130 05:28:34.178213 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.180322 kubelet[2720]: W0130 05:28:34.178245 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.180322 kubelet[2720]: E0130 05:28:34.180303 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.180322 kubelet[2720]: W0130 05:28:34.180315 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.180509 kubelet[2720]: E0130 05:28:34.180331 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.181343 kubelet[2720]: E0130 05:28:34.180978 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.181343 kubelet[2720]: W0130 05:28:34.181099 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.181343 kubelet[2720]: E0130 05:28:34.181139 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.181676 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.183947 kubelet[2720]: W0130 05:28:34.181689 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.181700 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.182024 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.183947 kubelet[2720]: W0130 05:28:34.182033 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.182043 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.182457 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.183947 kubelet[2720]: W0130 05:28:34.182465 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.183947 kubelet[2720]: E0130 05:28:34.182475 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.184882 kubelet[2720]: E0130 05:28:34.184070 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.184882 kubelet[2720]: W0130 05:28:34.184079 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.184882 kubelet[2720]: E0130 05:28:34.184089 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.184882 kubelet[2720]: E0130 05:28:34.184629 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.184882 kubelet[2720]: W0130 05:28:34.184642 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.184882 kubelet[2720]: E0130 05:28:34.184682 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.184882 kubelet[2720]: E0130 05:28:34.184880 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.185058 kubelet[2720]: W0130 05:28:34.184909 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.185058 kubelet[2720]: E0130 05:28:34.184918 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.185106 kubelet[2720]: E0130 05:28:34.185075 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.185106 kubelet[2720]: W0130 05:28:34.185082 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.185106 kubelet[2720]: E0130 05:28:34.185091 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.185351 kubelet[2720]: E0130 05:28:34.185236 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.185351 kubelet[2720]: W0130 05:28:34.185245 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.185351 kubelet[2720]: E0130 05:28:34.185252 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.185351 kubelet[2720]: E0130 05:28:34.185318 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.185983 kubelet[2720]: E0130 05:28:34.185406 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.185983 kubelet[2720]: W0130 05:28:34.185412 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.185983 kubelet[2720]: E0130 05:28:34.185420 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.185983 kubelet[2720]: E0130 05:28:34.185971 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.185983 kubelet[2720]: W0130 05:28:34.185982 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.186096 kubelet[2720]: E0130 05:28:34.186031 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.186820 kubelet[2720]: E0130 05:28:34.186623 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.186820 kubelet[2720]: W0130 05:28:34.186640 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.186820 kubelet[2720]: E0130 05:28:34.186652 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.187093 kubelet[2720]: E0130 05:28:34.187074 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.187093 kubelet[2720]: W0130 05:28:34.187087 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.187259 kubelet[2720]: E0130 05:28:34.187218 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.187571 kubelet[2720]: E0130 05:28:34.187555 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.187625 kubelet[2720]: W0130 05:28:34.187587 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.187818 kubelet[2720]: E0130 05:28:34.187789 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.187980 kubelet[2720]: E0130 05:28:34.187965 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.187980 kubelet[2720]: W0130 05:28:34.187977 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.188118 kubelet[2720]: E0130 05:28:34.188101 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.188429 kubelet[2720]: E0130 05:28:34.188401 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.188429 kubelet[2720]: W0130 05:28:34.188413 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.188429 kubelet[2720]: E0130 05:28:34.188426 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.189582 kubelet[2720]: E0130 05:28:34.189569 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.189767 kubelet[2720]: W0130 05:28:34.189646 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.189767 kubelet[2720]: E0130 05:28:34.189660 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.190915 kubelet[2720]: E0130 05:28:34.190591 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.190915 kubelet[2720]: W0130 05:28:34.190603 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.190915 kubelet[2720]: E0130 05:28:34.190613 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.191956 kubelet[2720]: E0130 05:28:34.191942 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.192044 kubelet[2720]: W0130 05:28:34.192033 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.192115 kubelet[2720]: E0130 05:28:34.192086 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.194156 kubelet[2720]: E0130 05:28:34.194134 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.194156 kubelet[2720]: W0130 05:28:34.194147 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.194156 kubelet[2720]: E0130 05:28:34.194158 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.197957 kubelet[2720]: E0130 05:28:34.197940 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.197957 kubelet[2720]: W0130 05:28:34.197955 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.198029 kubelet[2720]: E0130 05:28:34.197966 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.209576 kubelet[2720]: E0130 05:28:34.209548 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.209576 kubelet[2720]: W0130 05:28:34.209570 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.209736 kubelet[2720]: E0130 05:28:34.209588 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.267346 kubelet[2720]: E0130 05:28:34.267157 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.267346 kubelet[2720]: W0130 05:28:34.267216 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.267346 kubelet[2720]: E0130 05:28:34.267249 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.267510 kubelet[2720]: E0130 05:28:34.267497 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.267510 kubelet[2720]: W0130 05:28:34.267505 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.267645 kubelet[2720]: E0130 05:28:34.267536 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.267996 kubelet[2720]: E0130 05:28:34.267779 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.267996 kubelet[2720]: W0130 05:28:34.267811 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.267996 kubelet[2720]: E0130 05:28:34.267827 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.268117 kubelet[2720]: E0130 05:28:34.268095 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.268117 kubelet[2720]: W0130 05:28:34.268109 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.268176 kubelet[2720]: E0130 05:28:34.268121 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.268368 kubelet[2720]: E0130 05:28:34.268348 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.268368 kubelet[2720]: W0130 05:28:34.268362 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.268621 kubelet[2720]: E0130 05:28:34.268376 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.268674 kubelet[2720]: E0130 05:28:34.268619 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.268674 kubelet[2720]: W0130 05:28:34.268629 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.268674 kubelet[2720]: E0130 05:28:34.268647 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.268983 kubelet[2720]: E0130 05:28:34.268966 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.268983 kubelet[2720]: W0130 05:28:34.268981 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.269077 kubelet[2720]: E0130 05:28:34.269067 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.269378 kubelet[2720]: E0130 05:28:34.269304 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.269378 kubelet[2720]: W0130 05:28:34.269314 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.269436 kubelet[2720]: E0130 05:28:34.269409 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.269662 kubelet[2720]: E0130 05:28:34.269526 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.269662 kubelet[2720]: W0130 05:28:34.269536 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.269662 kubelet[2720]: E0130 05:28:34.269607 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.269750 kubelet[2720]: E0130 05:28:34.269722 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.269750 kubelet[2720]: W0130 05:28:34.269729 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.269932 kubelet[2720]: E0130 05:28:34.269842 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.270076 kubelet[2720]: E0130 05:28:34.270010 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.270076 kubelet[2720]: W0130 05:28:34.270019 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.270160 kubelet[2720]: E0130 05:28:34.270130 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.270355 kubelet[2720]: E0130 05:28:34.270263 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.270355 kubelet[2720]: W0130 05:28:34.270273 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.270355 kubelet[2720]: E0130 05:28:34.270284 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.270601 kubelet[2720]: E0130 05:28:34.270502 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.270601 kubelet[2720]: W0130 05:28:34.270510 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.270601 kubelet[2720]: E0130 05:28:34.270540 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.270929 kubelet[2720]: E0130 05:28:34.270799 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.270929 kubelet[2720]: W0130 05:28:34.270809 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.270929 kubelet[2720]: E0130 05:28:34.270915 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.271328 kubelet[2720]: E0130 05:28:34.271309 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.271328 kubelet[2720]: W0130 05:28:34.271322 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.271493 kubelet[2720]: E0130 05:28:34.271412 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.271555 kubelet[2720]: E0130 05:28:34.271543 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.271597 kubelet[2720]: W0130 05:28:34.271554 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.271700 kubelet[2720]: E0130 05:28:34.271651 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.271855 kubelet[2720]: E0130 05:28:34.271834 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.271855 kubelet[2720]: W0130 05:28:34.271850 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.271950 kubelet[2720]: E0130 05:28:34.271906 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.272584 kubelet[2720]: E0130 05:28:34.272260 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.272584 kubelet[2720]: W0130 05:28:34.272274 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.272584 kubelet[2720]: E0130 05:28:34.272290 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.272584 kubelet[2720]: E0130 05:28:34.272530 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.272584 kubelet[2720]: W0130 05:28:34.272538 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.272584 kubelet[2720]: E0130 05:28:34.272560 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.272801 kubelet[2720]: E0130 05:28:34.272781 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.272801 kubelet[2720]: W0130 05:28:34.272795 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.272864 kubelet[2720]: E0130 05:28:34.272814 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.273329 kubelet[2720]: E0130 05:28:34.273217 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.273329 kubelet[2720]: W0130 05:28:34.273228 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.274023 kubelet[2720]: E0130 05:28:34.274004 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.275074 kubelet[2720]: E0130 05:28:34.275052 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.275074 kubelet[2720]: W0130 05:28:34.275070 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.275148 kubelet[2720]: E0130 05:28:34.275118 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.275388 kubelet[2720]: E0130 05:28:34.275372 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.275388 kubelet[2720]: W0130 05:28:34.275386 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.275454 kubelet[2720]: E0130 05:28:34.275399 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.278083 kubelet[2720]: E0130 05:28:34.276169 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.278083 kubelet[2720]: W0130 05:28:34.276180 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.278083 kubelet[2720]: E0130 05:28:34.276189 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:34.278083 kubelet[2720]: E0130 05:28:34.278041 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.278083 kubelet[2720]: W0130 05:28:34.278050 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.278083 kubelet[2720]: E0130 05:28:34.278060 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.287457 kubelet[2720]: E0130 05:28:34.287430 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:34.287457 kubelet[2720]: W0130 05:28:34.287447 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:34.287570 kubelet[2720]: E0130 05:28:34.287465 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:34.340855 containerd[1502]: time="2025-01-30T05:28:34.340798032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tsk6,Uid:aebd4ecb-e91a-4347-8132-bbb686ff654d,Namespace:calico-system,Attempt:0,}" Jan 30 05:28:34.380245 containerd[1502]: time="2025-01-30T05:28:34.380134794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:34.381290 containerd[1502]: time="2025-01-30T05:28:34.381245843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:34.381751 containerd[1502]: time="2025-01-30T05:28:34.381430248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:34.381857 containerd[1502]: time="2025-01-30T05:28:34.381700783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:34.408300 systemd[1]: Started cri-containerd-fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea.scope - libcontainer container fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea. 
Jan 30 05:28:34.437435 containerd[1502]: time="2025-01-30T05:28:34.436985110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tsk6,Uid:aebd4ecb-e91a-4347-8132-bbb686ff654d,Namespace:calico-system,Attempt:0,} returns sandbox id \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\"" Jan 30 05:28:34.439423 containerd[1502]: time="2025-01-30T05:28:34.439230771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 05:28:35.173864 kubelet[2720]: E0130 05:28:35.171951 2720 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 05:28:35.173864 kubelet[2720]: E0130 05:28:35.172102 2720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89210e16-1b6d-48cd-9fcc-c58fd591da25-typha-certs podName:89210e16-1b6d-48cd-9fcc-c58fd591da25 nodeName:}" failed. No retries permitted until 2025-01-30 05:28:35.672067476 +0000 UTC m=+12.810918473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/89210e16-1b6d-48cd-9fcc-c58fd591da25-typha-certs") pod "calico-typha-6f97475579-jpnbt" (UID: "89210e16-1b6d-48cd-9fcc-c58fd591da25") : failed to sync secret cache: timed out waiting for the condition Jan 30 05:28:35.180216 kubelet[2720]: E0130 05:28:35.179275 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.180216 kubelet[2720]: W0130 05:28:35.179305 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.180216 kubelet[2720]: E0130 05:28:35.179335 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.280274 kubelet[2720]: E0130 05:28:35.280229 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.280274 kubelet[2720]: W0130 05:28:35.280266 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.280606 kubelet[2720]: E0130 05:28:35.280297 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.382289 kubelet[2720]: E0130 05:28:35.382190 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.382289 kubelet[2720]: W0130 05:28:35.382231 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.382289 kubelet[2720]: E0130 05:28:35.382268 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:35.483742 kubelet[2720]: E0130 05:28:35.483587 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.483742 kubelet[2720]: W0130 05:28:35.483625 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.483742 kubelet[2720]: E0130 05:28:35.483658 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.585274 kubelet[2720]: E0130 05:28:35.585230 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.585548 kubelet[2720]: W0130 05:28:35.585467 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.585548 kubelet[2720]: E0130 05:28:35.585543 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.686959 kubelet[2720]: E0130 05:28:35.686854 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.686959 kubelet[2720]: W0130 05:28:35.686918 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.686959 kubelet[2720]: E0130 05:28:35.686951 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.687935 kubelet[2720]: E0130 05:28:35.687720 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.687935 kubelet[2720]: W0130 05:28:35.687746 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.687935 kubelet[2720]: E0130 05:28:35.687768 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.688305 kubelet[2720]: E0130 05:28:35.688273 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.688305 kubelet[2720]: W0130 05:28:35.688296 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.688437 kubelet[2720]: E0130 05:28:35.688314 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:28:35.689076 kubelet[2720]: E0130 05:28:35.688818 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.689076 kubelet[2720]: W0130 05:28:35.688844 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.689076 kubelet[2720]: E0130 05:28:35.688865 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.689434 kubelet[2720]: E0130 05:28:35.689377 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.689434 kubelet[2720]: W0130 05:28:35.689400 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.689434 kubelet[2720]: E0130 05:28:35.689421 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.700277 kubelet[2720]: E0130 05:28:35.700126 2720 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:28:35.700277 kubelet[2720]: W0130 05:28:35.700164 2720 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:28:35.700277 kubelet[2720]: E0130 05:28:35.700194 2720 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:28:35.850200 containerd[1502]: time="2025-01-30T05:28:35.850067890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f97475579-jpnbt,Uid:89210e16-1b6d-48cd-9fcc-c58fd591da25,Namespace:calico-system,Attempt:0,}" Jan 30 05:28:35.907209 containerd[1502]: time="2025-01-30T05:28:35.907012091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:28:35.908065 containerd[1502]: time="2025-01-30T05:28:35.907848577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:28:35.908217 containerd[1502]: time="2025-01-30T05:28:35.908035948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:35.909074 containerd[1502]: time="2025-01-30T05:28:35.908945862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:28:35.959265 systemd[1]: Started cri-containerd-6bec05a62be65cb59085b23b7425eb650dda9ec1791365aeecc0b57edb03aeb9.scope - libcontainer container 6bec05a62be65cb59085b23b7425eb650dda9ec1791365aeecc0b57edb03aeb9. 
Jan 30 05:28:35.993271 kubelet[2720]: E0130 05:28:35.992226 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:36.038186 containerd[1502]: time="2025-01-30T05:28:36.038137372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f97475579-jpnbt,Uid:89210e16-1b6d-48cd-9fcc-c58fd591da25,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bec05a62be65cb59085b23b7425eb650dda9ec1791365aeecc0b57edb03aeb9\"" Jan 30 05:28:36.078421 update_engine[1474]: I20250130 05:28:36.078346 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:28:36.078973 update_engine[1474]: I20250130 05:28:36.078652 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:28:36.078973 update_engine[1474]: I20250130 05:28:36.078869 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:28:36.079684 update_engine[1474]: E20250130 05:28:36.079629 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:28:36.079750 update_engine[1474]: I20250130 05:28:36.079686 1474 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 05:28:36.079750 update_engine[1474]: I20250130 05:28:36.079699 1474 omaha_request_action.cc:617] Omaha request response: Jan 30 05:28:36.079825 update_engine[1474]: E20250130 05:28:36.079799 1474 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 05:28:36.079863 update_engine[1474]: I20250130 05:28:36.079838 1474 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 05:28:36.079863 update_engine[1474]: I20250130 05:28:36.079850 1474 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.079859 1474 update_attempter.cc:306] Processing Done. Jan 30 05:28:36.080231 update_engine[1474]: E20250130 05:28:36.079878 1474 update_attempter.cc:619] Update failed. Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.079918 1474 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.079933 1474 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.079947 1474 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.080044 1474 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.080071 1474 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.080082 1474 omaha_request_action.cc:272] Request: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: Jan 30 05:28:36.080231 update_engine[1474]: I20250130 05:28:36.080092 1474 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 05:28:36.080737 update_engine[1474]: I20250130 05:28:36.080297 1474 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 05:28:36.080737 update_engine[1474]: I20250130 05:28:36.080508 1474 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 05:28:36.081279 update_engine[1474]: E20250130 05:28:36.081225 1474 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 05:28:36.081344 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081276 1474 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081288 1474 omaha_request_action.cc:617] Omaha request response: Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081300 1474 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081308 1474 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081317 1474 update_attempter.cc:306] Processing Done. Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081328 1474 update_attempter.cc:310] Error event sent. 
Jan 30 05:28:36.081843 update_engine[1474]: I20250130 05:28:36.081341 1474 update_check_scheduler.cc:74] Next update check in 42m50s Jan 30 05:28:36.082092 locksmithd[1511]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 05:28:37.993245 kubelet[2720]: E0130 05:28:37.992484 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:39.991945 kubelet[2720]: E0130 05:28:39.991835 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:41.992373 kubelet[2720]: E0130 05:28:41.992259 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:43.992510 kubelet[2720]: E0130 05:28:43.992402 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:45.636946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943225373.mount: Deactivated successfully. 
Jan 30 05:28:45.766374 containerd[1502]: time="2025-01-30T05:28:45.766310921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:45.767734 containerd[1502]: time="2025-01-30T05:28:45.767687441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 05:28:45.769240 containerd[1502]: time="2025-01-30T05:28:45.769201160Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:45.772545 containerd[1502]: time="2025-01-30T05:28:45.772498157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:45.773497 containerd[1502]: time="2025-01-30T05:28:45.773449794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 11.334189267s" Jan 30 05:28:45.773540 containerd[1502]: time="2025-01-30T05:28:45.773502254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 05:28:45.776363 containerd[1502]: time="2025-01-30T05:28:45.775813579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 05:28:45.776431 containerd[1502]: time="2025-01-30T05:28:45.776327750Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 05:28:45.794806 containerd[1502]: time="2025-01-30T05:28:45.794755337Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799\"" Jan 30 05:28:45.795562 containerd[1502]: time="2025-01-30T05:28:45.795416897Z" level=info msg="StartContainer for \"fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799\"" Jan 30 05:28:45.831014 systemd[1]: Started cri-containerd-fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799.scope - libcontainer container fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799. Jan 30 05:28:45.867548 containerd[1502]: time="2025-01-30T05:28:45.867487236Z" level=info msg="StartContainer for \"fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799\" returns successfully" Jan 30 05:28:45.884009 systemd[1]: cri-containerd-fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799.scope: Deactivated successfully. 
Jan 30 05:28:45.947712 containerd[1502]: time="2025-01-30T05:28:45.947405581Z" level=info msg="shim disconnected" id=fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799 namespace=k8s.io Jan 30 05:28:45.947712 containerd[1502]: time="2025-01-30T05:28:45.947479129Z" level=warning msg="cleaning up after shim disconnected" id=fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799 namespace=k8s.io Jan 30 05:28:45.947712 containerd[1502]: time="2025-01-30T05:28:45.947490971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:45.992384 kubelet[2720]: E0130 05:28:45.992268 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:46.577044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcf91fd8a8b8aa1340fc68bdcbd44829f0f94ce6bf68581d853528661a8c8799-rootfs.mount: Deactivated successfully. Jan 30 05:28:47.991664 kubelet[2720]: E0130 05:28:47.991599 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:48.170445 containerd[1502]: time="2025-01-30T05:28:48.170357760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:48.171713 containerd[1502]: time="2025-01-30T05:28:48.171630849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 05:28:48.172823 containerd[1502]: time="2025-01-30T05:28:48.172769113Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:48.207919 containerd[1502]: time="2025-01-30T05:28:48.206768767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:48.209018 containerd[1502]: time="2025-01-30T05:28:48.208969953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.433122219s" Jan 30 05:28:48.209018 containerd[1502]: time="2025-01-30T05:28:48.209008937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 05:28:48.211301 containerd[1502]: time="2025-01-30T05:28:48.211218629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 05:28:48.229145 containerd[1502]: time="2025-01-30T05:28:48.229101437Z" level=info msg="CreateContainer within sandbox \"6bec05a62be65cb59085b23b7425eb650dda9ec1791365aeecc0b57edb03aeb9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
Jan 30 05:28:48.256177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288825380.mount: Deactivated successfully. Jan 30 05:28:48.262816 containerd[1502]: time="2025-01-30T05:28:48.262764384Z" level=info msg="CreateContainer within sandbox \"6bec05a62be65cb59085b23b7425eb650dda9ec1791365aeecc0b57edb03aeb9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c067c1b3b78a371110a8ef307c965ff8948aa60e38a3be9e13b91729429e6ebb\"" Jan 30 05:28:48.263611 containerd[1502]: time="2025-01-30T05:28:48.263420936Z" level=info msg="StartContainer for \"c067c1b3b78a371110a8ef307c965ff8948aa60e38a3be9e13b91729429e6ebb\"" Jan 30 05:28:48.326109 systemd[1]: Started cri-containerd-c067c1b3b78a371110a8ef307c965ff8948aa60e38a3be9e13b91729429e6ebb.scope - libcontainer container c067c1b3b78a371110a8ef307c965ff8948aa60e38a3be9e13b91729429e6ebb. Jan 30 05:28:48.379391 containerd[1502]: time="2025-01-30T05:28:48.379067228Z" level=info msg="StartContainer for \"c067c1b3b78a371110a8ef307c965ff8948aa60e38a3be9e13b91729429e6ebb\" returns successfully" Jan 30 05:28:49.992602 kubelet[2720]: E0130 05:28:49.992397 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:50.123570 kubelet[2720]: I0130 05:28:50.123505 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:28:51.999464 kubelet[2720]: E0130 05:28:51.999385 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:53.070157 containerd[1502]: time="2025-01-30T05:28:53.070108585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:53.071513 containerd[1502]: time="2025-01-30T05:28:53.071475119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 05:28:53.072634 containerd[1502]: time="2025-01-30T05:28:53.072595907Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:53.074927 containerd[1502]: time="2025-01-30T05:28:53.074903477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:28:53.075806 containerd[1502]: time="2025-01-30T05:28:53.075422613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.861900833s" Jan 30 05:28:53.075806 containerd[1502]: time="2025-01-30T05:28:53.075448382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 05:28:53.077398 containerd[1502]: time="2025-01-30T05:28:53.077379128Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 05:28:53.106923 containerd[1502]: time="2025-01-30T05:28:53.106841623Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12\"" Jan 30 05:28:53.107547 containerd[1502]: time="2025-01-30T05:28:53.107421774Z" level=info msg="StartContainer for \"ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12\"" Jan 30 05:28:53.228016 systemd[1]: Started cri-containerd-ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12.scope - libcontainer container ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12. Jan 30 05:28:53.276679 containerd[1502]: time="2025-01-30T05:28:53.276630190Z" level=info msg="StartContainer for \"ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12\" returns successfully" Jan 30 05:28:53.802245 systemd[1]: cri-containerd-ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12.scope: Deactivated successfully. Jan 30 05:28:53.866661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12-rootfs.mount: Deactivated successfully. Jan 30 05:28:53.875093 kubelet[2720]: I0130 05:28:53.875054 2720 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 05:28:53.935863 kubelet[2720]: I0130 05:28:53.933818 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f97475579-jpnbt" podStartSLOduration=8.762641835 podStartE2EDuration="20.93379552s" podCreationTimestamp="2025-01-30 05:28:33 +0000 UTC" firstStartedPulling="2025-01-30 05:28:36.039611226 +0000 UTC m=+13.178462183" lastFinishedPulling="2025-01-30 05:28:48.210764901 +0000 UTC m=+25.349615868" observedRunningTime="2025-01-30 05:28:49.143644486 +0000 UTC m=+26.282495483" watchObservedRunningTime="2025-01-30 05:28:53.93379552 +0000 UTC m=+31.072646487" Jan 30 05:28:53.963915 containerd[1502]: time="2025-01-30T05:28:53.963821014Z" level=info msg="shim disconnected" id=ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12 namespace=k8s.io Jan 30 05:28:53.964400 containerd[1502]: time="2025-01-30T05:28:53.964165327Z" level=warning msg="cleaning up after shim disconnected" id=ccd642253b57ab98b56121e14959988a451787a8c006861500fa871c976c5a12 namespace=k8s.io Jan 30 05:28:53.964400 containerd[1502]: time="2025-01-30T05:28:53.964180276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:28:53.968126 systemd[1]: Created slice kubepods-burstable-pod4f8dbe13_01dc_4a7e_bada_6efa483db38e.slice - libcontainer container kubepods-burstable-pod4f8dbe13_01dc_4a7e_bada_6efa483db38e.slice. Jan 30 05:28:53.983367 systemd[1]: Created slice kubepods-besteffort-pod3a77443b_37f7_4875_b9d5_748a91d1aa99.slice - libcontainer container kubepods-besteffort-pod3a77443b_37f7_4875_b9d5_748a91d1aa99.slice. 
Jan 30 05:28:53.987971 containerd[1502]: time="2025-01-30T05:28:53.987826919Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:28:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:28:54.001957 systemd[1]: Created slice kubepods-besteffort-podbad96a0b_b018_45a1_96af_9edbd5119f12.slice - libcontainer container kubepods-besteffort-podbad96a0b_b018_45a1_96af_9edbd5119f12.slice. Jan 30 05:28:54.011536 systemd[1]: Created slice kubepods-burstable-pod69bf0978_c832_4b9d_bcaa_b4229f459b1a.slice - libcontainer container kubepods-burstable-pod69bf0978_c832_4b9d_bcaa_b4229f459b1a.slice. Jan 30 05:28:54.026325 systemd[1]: Created slice kubepods-besteffort-podd4912169_2df4_466e_ac9f_4416a8f727db.slice - libcontainer container kubepods-besteffort-podd4912169_2df4_466e_ac9f_4416a8f727db.slice. Jan 30 05:28:54.030345 systemd[1]: Created slice kubepods-besteffort-pod89d6eda1_89d1_46d7_9c9a_f40abee39703.slice - libcontainer container kubepods-besteffort-pod89d6eda1_89d1_46d7_9c9a_f40abee39703.slice. Jan 30 05:28:54.035299 containerd[1502]: time="2025-01-30T05:28:54.035147285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlwbr,Uid:89d6eda1-89d1-46d7-9c9a-f40abee39703,Namespace:calico-system,Attempt:0,}" Jan 30 05:28:54.047172 kubelet[2720]: I0130 05:28:54.047092 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qzrh\" (UniqueName: \"kubernetes.io/projected/3a77443b-37f7-4875-b9d5-748a91d1aa99-kube-api-access-2qzrh\") pod \"calico-apiserver-5c5fbd8b55-c7cnm\" (UID: \"3a77443b-37f7-4875-b9d5-748a91d1aa99\") " pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" Jan 30 05:28:54.047172 kubelet[2720]: I0130 05:28:54.047142 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bad96a0b-b018-45a1-96af-9edbd5119f12-tigera-ca-bundle\") pod \"calico-kube-controllers-5c7559cb77-j6lmn\" (UID: \"bad96a0b-b018-45a1-96af-9edbd5119f12\") " pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" Jan 30 05:28:54.047172 kubelet[2720]: I0130 05:28:54.047175 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69bf0978-c832-4b9d-bcaa-b4229f459b1a-config-volume\") pod \"coredns-668d6bf9bc-s695l\" (UID: \"69bf0978-c832-4b9d-bcaa-b4229f459b1a\") " pod="kube-system/coredns-668d6bf9bc-s695l" Jan 30 05:28:54.047692 kubelet[2720]: I0130 05:28:54.047213 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4912169-2df4-466e-ac9f-4416a8f727db-calico-apiserver-certs\") pod \"calico-apiserver-5c5fbd8b55-nbxvc\" (UID: \"d4912169-2df4-466e-ac9f-4416a8f727db\") " pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" Jan 30 05:28:54.047692 kubelet[2720]: I0130 05:28:54.047229 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctk9q\" (UniqueName: \"kubernetes.io/projected/69bf0978-c832-4b9d-bcaa-b4229f459b1a-kube-api-access-ctk9q\") pod \"coredns-668d6bf9bc-s695l\" (UID: \"69bf0978-c832-4b9d-bcaa-b4229f459b1a\") " pod="kube-system/coredns-668d6bf9bc-s695l" Jan 30 05:28:54.047692 kubelet[2720]: I0130 
05:28:54.047245 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9jn\" (UniqueName: \"kubernetes.io/projected/4f8dbe13-01dc-4a7e-bada-6efa483db38e-kube-api-access-jt9jn\") pod \"coredns-668d6bf9bc-qz74f\" (UID: \"4f8dbe13-01dc-4a7e-bada-6efa483db38e\") " pod="kube-system/coredns-668d6bf9bc-qz74f" Jan 30 05:28:54.047692 kubelet[2720]: I0130 05:28:54.047260 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a77443b-37f7-4875-b9d5-748a91d1aa99-calico-apiserver-certs\") pod \"calico-apiserver-5c5fbd8b55-c7cnm\" (UID: \"3a77443b-37f7-4875-b9d5-748a91d1aa99\") " pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" Jan 30 05:28:54.047692 kubelet[2720]: I0130 05:28:54.047278 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2sg\" (UniqueName: \"kubernetes.io/projected/bad96a0b-b018-45a1-96af-9edbd5119f12-kube-api-access-8g2sg\") pod \"calico-kube-controllers-5c7559cb77-j6lmn\" (UID: \"bad96a0b-b018-45a1-96af-9edbd5119f12\") " pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" Jan 30 05:28:54.047809 kubelet[2720]: I0130 05:28:54.047295 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dcc\" (UniqueName: \"kubernetes.io/projected/d4912169-2df4-466e-ac9f-4416a8f727db-kube-api-access-j5dcc\") pod \"calico-apiserver-5c5fbd8b55-nbxvc\" (UID: \"d4912169-2df4-466e-ac9f-4416a8f727db\") " pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" Jan 30 05:28:54.047809 kubelet[2720]: I0130 05:28:54.047308 2720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f8dbe13-01dc-4a7e-bada-6efa483db38e-config-volume\") pod \"coredns-668d6bf9bc-qz74f\" (UID: \"4f8dbe13-01dc-4a7e-bada-6efa483db38e\") " pod="kube-system/coredns-668d6bf9bc-qz74f" Jan 30 05:28:54.168978 containerd[1502]: time="2025-01-30T05:28:54.168830504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 05:28:54.280348 containerd[1502]: time="2025-01-30T05:28:54.280281201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz74f,Uid:4f8dbe13-01dc-4a7e-bada-6efa483db38e,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:54.299048 containerd[1502]: time="2025-01-30T05:28:54.298810779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-c7cnm,Uid:3a77443b-37f7-4875-b9d5-748a91d1aa99,Namespace:calico-apiserver,Attempt:0,}" Jan 30 05:28:54.316817 containerd[1502]: time="2025-01-30T05:28:54.316629948Z" level=error msg="Failed to destroy network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.318214 containerd[1502]: time="2025-01-30T05:28:54.317724547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7559cb77-j6lmn,Uid:bad96a0b-b018-45a1-96af-9edbd5119f12,Namespace:calico-system,Attempt:0,}" Jan 30 05:28:54.319436 containerd[1502]: time="2025-01-30T05:28:54.319417974Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-s695l,Uid:69bf0978-c832-4b9d-bcaa-b4229f459b1a,Namespace:kube-system,Attempt:0,}" Jan 30 05:28:54.323780 containerd[1502]: time="2025-01-30T05:28:54.323752147Z" level=error msg="encountered an error cleaning up failed sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.324041 containerd[1502]: time="2025-01-30T05:28:54.323922580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlwbr,Uid:89d6eda1-89d1-46d7-9c9a-f40abee39703,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.326368 kubelet[2720]: E0130 05:28:54.326257 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.328778 kubelet[2720]: E0130 05:28:54.326449 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:54.328778 kubelet[2720]: E0130 05:28:54.326582 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rlwbr" Jan 30 05:28:54.328778 kubelet[2720]: E0130 05:28:54.326698 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rlwbr_calico-system(89d6eda1-89d1-46d7-9c9a-f40abee39703)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rlwbr_calico-system(89d6eda1-89d1-46d7-9c9a-f40abee39703)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:54.335316 containerd[1502]: time="2025-01-30T05:28:54.335281097Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-nbxvc,Uid:d4912169-2df4-466e-ac9f-4416a8f727db,Namespace:calico-apiserver,Attempt:0,}" Jan 30 05:28:54.429952 containerd[1502]: time="2025-01-30T05:28:54.429600633Z" level=error msg="Failed to destroy network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.430661 containerd[1502]: time="2025-01-30T05:28:54.430631992Z" level=error msg="encountered an error cleaning up failed sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.430712 containerd[1502]: time="2025-01-30T05:28:54.430686676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz74f,Uid:4f8dbe13-01dc-4a7e-bada-6efa483db38e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.431823 kubelet[2720]: E0130 05:28:54.430951 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.431823 kubelet[2720]: E0130 05:28:54.431015 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qz74f" Jan 30 05:28:54.431823 kubelet[2720]: E0130 05:28:54.431039 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qz74f" Jan 30 05:28:54.434087 kubelet[2720]: E0130 05:28:54.431086 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qz74f_kube-system(4f8dbe13-01dc-4a7e-bada-6efa483db38e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qz74f_kube-system(4f8dbe13-01dc-4a7e-bada-6efa483db38e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qz74f" podUID="4f8dbe13-01dc-4a7e-bada-6efa483db38e" Jan 30 05:28:54.489274 containerd[1502]: time="2025-01-30T05:28:54.489214263Z" level=error msg="Failed to destroy network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.489993 containerd[1502]: time="2025-01-30T05:28:54.489805035Z" level=error msg="encountered an error cleaning up failed sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.489993 containerd[1502]: time="2025-01-30T05:28:54.489879206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s695l,Uid:69bf0978-c832-4b9d-bcaa-b4229f459b1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.490388 kubelet[2720]: E0130 05:28:54.490340 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.490461 kubelet[2720]: E0130 05:28:54.490407 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s695l" Jan 30 05:28:54.490673 kubelet[2720]: E0130 05:28:54.490642 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s695l" Jan 30 05:28:54.492808 kubelet[2720]: E0130 05:28:54.492749 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s695l_kube-system(69bf0978-c832-4b9d-bcaa-b4229f459b1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s695l_kube-system(69bf0978-c832-4b9d-bcaa-b4229f459b1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s695l" podUID="69bf0978-c832-4b9d-bcaa-b4229f459b1a" Jan 30 05:28:54.505749 containerd[1502]: time="2025-01-30T05:28:54.505706752Z" level=error msg="Failed to destroy network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.507075 containerd[1502]: time="2025-01-30T05:28:54.507050494Z" level=error msg="encountered an error cleaning up failed sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.507177 containerd[1502]: time="2025-01-30T05:28:54.507156345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7559cb77-j6lmn,Uid:bad96a0b-b018-45a1-96af-9edbd5119f12,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.507473 kubelet[2720]: E0130 05:28:54.507444 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.507603 kubelet[2720]: E0130 05:28:54.507582 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" Jan 30 05:28:54.507710 kubelet[2720]: E0130 05:28:54.507693 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" Jan 30 05:28:54.507881 kubelet[2720]: E0130 05:28:54.507836 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c7559cb77-j6lmn_calico-system(bad96a0b-b018-45a1-96af-9edbd5119f12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c7559cb77-j6lmn_calico-system(bad96a0b-b018-45a1-96af-9edbd5119f12)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" podUID="bad96a0b-b018-45a1-96af-9edbd5119f12" Jan 30 05:28:54.510722 containerd[1502]: time="2025-01-30T05:28:54.510691751Z" level=error msg="Failed to destroy network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.511136 containerd[1502]: time="2025-01-30T05:28:54.511114635Z" level=error msg="encountered an error cleaning up failed sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.511232 containerd[1502]: time="2025-01-30T05:28:54.511212890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-c7cnm,Uid:3a77443b-37f7-4875-b9d5-748a91d1aa99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.511630 kubelet[2720]: E0130 05:28:54.511605 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.511673 kubelet[2720]: E0130 05:28:54.511647 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" Jan 30 05:28:54.511673 kubelet[2720]: E0130 05:28:54.511664 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" Jan 30 05:28:54.511783 kubelet[2720]: E0130 05:28:54.511698 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c5fbd8b55-c7cnm_calico-apiserver(3a77443b-37f7-4875-b9d5-748a91d1aa99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5c5fbd8b55-c7cnm_calico-apiserver(3a77443b-37f7-4875-b9d5-748a91d1aa99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" podUID="3a77443b-37f7-4875-b9d5-748a91d1aa99" Jan 30 05:28:54.533226 containerd[1502]: time="2025-01-30T05:28:54.533181241Z" level=error msg="Failed to destroy network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.533648 containerd[1502]: time="2025-01-30T05:28:54.533626667Z" level=error msg="encountered an error cleaning up failed sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.533755 containerd[1502]: time="2025-01-30T05:28:54.533731647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-nbxvc,Uid:d4912169-2df4-466e-ac9f-4416a8f727db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.534115 kubelet[2720]: E0130 05:28:54.534003 2720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:54.534115 kubelet[2720]: E0130 05:28:54.534069 2720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" Jan 30 05:28:54.534115 kubelet[2720]: E0130 05:28:54.534090 2720 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" Jan 30 05:28:54.534371 kubelet[2720]: E0130 05:28:54.534136 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5c5fbd8b55-nbxvc_calico-apiserver(d4912169-2df4-466e-ac9f-4416a8f727db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c5fbd8b55-nbxvc_calico-apiserver(d4912169-2df4-466e-ac9f-4416a8f727db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" podUID="d4912169-2df4-466e-ac9f-4416a8f727db" Jan 30 05:28:55.103143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67-shm.mount: Deactivated successfully. Jan 30 05:28:55.167675 kubelet[2720]: I0130 05:28:55.167580 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:28:55.176773 kubelet[2720]: I0130 05:28:55.176548 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:28:55.180553 containerd[1502]: time="2025-01-30T05:28:55.180119443Z" level=info msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" Jan 30 05:28:55.186653 containerd[1502]: time="2025-01-30T05:28:55.186017563Z" level=info msg="Ensure that sandbox 68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe in task-service has been cleanup successfully" Jan 30 05:28:55.191978 kubelet[2720]: I0130 05:28:55.191852 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:28:55.194340 containerd[1502]: time="2025-01-30T05:28:55.191296125Z" level=info msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" Jan 30 05:28:55.195007 containerd[1502]: time="2025-01-30T05:28:55.194873524Z" level=info msg="Ensure that sandbox d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263 in task-service has been cleanup successfully" Jan 30 05:28:55.199520 containerd[1502]: time="2025-01-30T05:28:55.198802151Z" level=info msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" Jan 30 05:28:55.199520 containerd[1502]: time="2025-01-30T05:28:55.199317901Z" level=info msg="Ensure that sandbox ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c in task-service has been cleanup successfully" Jan 30 05:28:55.202089 kubelet[2720]: I0130 05:28:55.202010 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:28:55.210164 containerd[1502]: time="2025-01-30T05:28:55.210053655Z" level=info msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" Jan 30 05:28:55.210439 containerd[1502]: time="2025-01-30T05:28:55.210395094Z" level=info msg="Ensure that sandbox 85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480 in task-service has been cleanup successfully" Jan 30 05:28:55.211323 kubelet[2720]: I0130 05:28:55.211275 2720 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:28:55.217181 containerd[1502]: time="2025-01-30T05:28:55.217127049Z" level=info msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" Jan 30 05:28:55.217441 containerd[1502]: time="2025-01-30T05:28:55.217329113Z" level=info msg="Ensure that sandbox 27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87 in task-service has been cleanup successfully" Jan 30 05:28:55.224966 kubelet[2720]: I0130 05:28:55.224833 2720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:28:55.233968 containerd[1502]: time="2025-01-30T05:28:55.233548970Z" level=info msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" Jan 30 05:28:55.233968 containerd[1502]: time="2025-01-30T05:28:55.233761774Z" level=info msg="Ensure that sandbox d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67 in task-service has been cleanup successfully" Jan 30 05:28:55.300480 containerd[1502]: time="2025-01-30T05:28:55.300142155Z" level=error msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" failed" error="failed to destroy network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.300950 kubelet[2720]: E0130 05:28:55.300824 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:28:55.301055 kubelet[2720]: E0130 05:28:55.300954 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263"} Jan 30 05:28:55.301091 kubelet[2720]: E0130 05:28:55.301068 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4912169-2df4-466e-ac9f-4416a8f727db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.301973 kubelet[2720]: E0130 05:28:55.301519 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4912169-2df4-466e-ac9f-4416a8f727db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" 
podUID="d4912169-2df4-466e-ac9f-4416a8f727db" Jan 30 05:28:55.305869 containerd[1502]: time="2025-01-30T05:28:55.305801411Z" level=error msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" failed" error="failed to destroy network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.306083 kubelet[2720]: E0130 05:28:55.306034 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:28:55.306127 kubelet[2720]: E0130 05:28:55.306087 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87"} Jan 30 05:28:55.306127 kubelet[2720]: E0130 05:28:55.306116 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f8dbe13-01dc-4a7e-bada-6efa483db38e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.306220 kubelet[2720]: E0130 05:28:55.306135 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f8dbe13-01dc-4a7e-bada-6efa483db38e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qz74f" podUID="4f8dbe13-01dc-4a7e-bada-6efa483db38e" Jan 30 05:28:55.316116 containerd[1502]: time="2025-01-30T05:28:55.315715604Z" level=error msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" failed" error="failed to destroy network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.316240 kubelet[2720]: E0130 05:28:55.315969 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:28:55.316240 kubelet[2720]: E0130 
05:28:55.316016 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe"} Jan 30 05:28:55.316240 kubelet[2720]: E0130 05:28:55.316048 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69bf0978-c832-4b9d-bcaa-b4229f459b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.316240 kubelet[2720]: E0130 05:28:55.316074 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69bf0978-c832-4b9d-bcaa-b4229f459b1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s695l" podUID="69bf0978-c832-4b9d-bcaa-b4229f459b1a" Jan 30 05:28:55.317515 kubelet[2720]: E0130 05:28:55.317209 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:28:55.317515 kubelet[2720]: E0130 05:28:55.317231 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c"} Jan 30 05:28:55.317515 kubelet[2720]: E0130 05:28:55.317318 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a77443b-37f7-4875-b9d5-748a91d1aa99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.317515 kubelet[2720]: E0130 05:28:55.317336 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a77443b-37f7-4875-b9d5-748a91d1aa99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" podUID="3a77443b-37f7-4875-b9d5-748a91d1aa99" Jan 30 05:28:55.317682 containerd[1502]: time="2025-01-30T05:28:55.317030112Z" level=error msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" failed" 
error="failed to destroy network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.323193 containerd[1502]: time="2025-01-30T05:28:55.323076744Z" level=error msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" failed" error="failed to destroy network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.323595 kubelet[2720]: E0130 05:28:55.323453 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:28:55.323595 kubelet[2720]: E0130 05:28:55.323512 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480"} Jan 30 05:28:55.323595 kubelet[2720]: E0130 05:28:55.323544 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bad96a0b-b018-45a1-96af-9edbd5119f12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.323595 kubelet[2720]: E0130 05:28:55.323568 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bad96a0b-b018-45a1-96af-9edbd5119f12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" podUID="bad96a0b-b018-45a1-96af-9edbd5119f12" Jan 30 05:28:55.326671 containerd[1502]: time="2025-01-30T05:28:55.326620800Z" level=error msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" failed" error="failed to destroy network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:28:55.326937 kubelet[2720]: E0130 05:28:55.326862 2720 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:28:55.326993 kubelet[2720]: E0130 05:28:55.326949 2720 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67"} Jan 30 05:28:55.326993 kubelet[2720]: E0130 05:28:55.326983 2720 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89d6eda1-89d1-46d7-9c9a-f40abee39703\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:28:55.327074 kubelet[2720]: E0130 05:28:55.327010 2720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89d6eda1-89d1-46d7-9c9a-f40abee39703\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rlwbr" podUID="89d6eda1-89d1-46d7-9c9a-f40abee39703" Jan 30 05:28:59.991317 kubelet[2720]: I0130 05:28:59.990912 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:29:00.993589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081009641.mount: Deactivated successfully. 
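Every RunPodSandbox and StopPodSandbox attempt in the span above fails with the same underlying cause reported by the Calico CNI plugin: stat /var/lib/calico/nodename: no such file or directory, meaning the calico/node container has not yet started and written its nodename file, so the plugin rejects both ADD and DELETE operations. A minimal sketch of that readiness gate follows; it is an illustration of the check the error message describes, not Calico's actual source.

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path quoted in the CNI errors above; calico/node is
// expected to write the node's name there once it is running and has
// mounted /var/lib/calico/ into place.
const nodenameFile = "/var/lib/calico/nodename"

// nodeName fails in the same way the log shows until calico/node has started.
func nodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return string(data), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

Consistent with this, the same StopPodSandbox calls that fail here succeed at 05:29:06, after the calico-node container has been started.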
Jan 30 05:29:01.253260 containerd[1502]: time="2025-01-30T05:29:01.203852378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 05:29:01.260959 containerd[1502]: time="2025-01-30T05:29:01.260855286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:01.266037 containerd[1502]: time="2025-01-30T05:29:01.266009189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.097120596s" Jan 30 05:29:01.266037 containerd[1502]: time="2025-01-30T05:29:01.266039087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 05:29:01.280922 containerd[1502]: time="2025-01-30T05:29:01.280119504Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:01.281317 containerd[1502]: time="2025-01-30T05:29:01.281283252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:01.368249 containerd[1502]: time="2025-01-30T05:29:01.368167666Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 05:29:01.592687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517353322.mount: Deactivated successfully. Jan 30 05:29:01.618845 containerd[1502]: time="2025-01-30T05:29:01.618767695Z" level=info msg="CreateContainer within sandbox \"fba1f21ec8b5395ad3fc3c530342fdcf2dd2218acdf0382e8dfde16c897b4cea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207\"" Jan 30 05:29:01.620090 containerd[1502]: time="2025-01-30T05:29:01.619835270Z" level=info msg="StartContainer for \"88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207\"" Jan 30 05:29:01.868626 systemd[1]: Started cri-containerd-88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207.scope - libcontainer container 88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207. Jan 30 05:29:01.944899 containerd[1502]: time="2025-01-30T05:29:01.944818585Z" level=info msg="StartContainer for \"88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207\" returns successfully" Jan 30 05:29:02.207601 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 05:29:02.209648 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
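The calico/node pull reported above finished "in 7.097120596s" with "bytes read=142742010", which is consistent with the PullImage request logged at 05:28:54.168830. A rough average transfer rate follows directly from the two quoted numbers (simple arithmetic, nothing containerd-specific):

```go
package main

import "fmt"

func main() {
	// Figures quoted in the calico/node pull records above.
	const bytesRead = 142742010     // "bytes read=142742010"
	const pullSeconds = 7.097120596 // "in 7.097120596s"

	fmt.Printf("average pull rate ≈ %.1f MB/s\n", bytesRead/pullSeconds/1e6) // ≈ 20.1 MB/s
}
```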
Jan 30 05:29:02.414569 kubelet[2720]: I0130 05:29:02.405267 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4tsk6" podStartSLOduration=2.539117295 podStartE2EDuration="29.386115638s" podCreationTimestamp="2025-01-30 05:28:33 +0000 UTC" firstStartedPulling="2025-01-30 05:28:34.438285723 +0000 UTC m=+11.577136691" lastFinishedPulling="2025-01-30 05:29:01.285284068 +0000 UTC m=+38.424135034" observedRunningTime="2025-01-30 05:29:02.385647316 +0000 UTC m=+39.524498303" watchObservedRunningTime="2025-01-30 05:29:02.386115638 +0000 UTC m=+39.524966606" Jan 30 05:29:03.410436 systemd[1]: run-containerd-runc-k8s.io-88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207-runc.nEC88T.mount: Deactivated successfully. Jan 30 05:29:03.991001 kernel: bpftool[3975]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 05:29:04.312646 systemd-networkd[1397]: vxlan.calico: Link UP Jan 30 05:29:04.312656 systemd-networkd[1397]: vxlan.calico: Gained carrier Jan 30 05:29:05.381440 systemd-networkd[1397]: vxlan.calico: Gained IPv6LL Jan 30 05:29:05.994505 containerd[1502]: time="2025-01-30T05:29:05.993641273Z" level=info msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" Jan 30 05:29:05.994505 containerd[1502]: time="2025-01-30T05:29:05.993718289Z" level=info msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.139 [INFO][4073] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.139 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" iface="eth0" netns="/var/run/netns/cni-00723cc6-f2f5-5a42-e60a-ef95157a23b1" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.140 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" iface="eth0" netns="/var/run/netns/cni-00723cc6-f2f5-5a42-e60a-ef95157a23b1" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" iface="eth0" netns="/var/run/netns/cni-00723cc6-f2f5-5a42-e60a-ef95157a23b1" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4073] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.414 [INFO][4089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.417 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.418 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.435 [WARNING][4089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.435 [INFO][4089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.437 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:06.449828 containerd[1502]: 2025-01-30 05:29:06.442 [INFO][4073] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:06.463843 systemd[1]: run-netns-cni\x2d00723cc6\x2df2f5\x2d5a42\x2de60a\x2def95157a23b1.mount: Deactivated successfully. Jan 30 05:29:06.477203 containerd[1502]: time="2025-01-30T05:29:06.477113597Z" level=info msg="TearDown network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" successfully" Jan 30 05:29:06.477379 containerd[1502]: time="2025-01-30T05:29:06.477299231Z" level=info msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" returns successfully" Jan 30 05:29:06.478685 containerd[1502]: time="2025-01-30T05:29:06.478593283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz74f,Uid:4f8dbe13-01dc-4a7e-bada-6efa483db38e,Namespace:kube-system,Attempt:1,}" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.141 [INFO][4081] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.141 [INFO][4081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" iface="eth0" netns="/var/run/netns/cni-19f77d70-230f-c619-381a-60e5e49e5746" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.142 [INFO][4081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" iface="eth0" netns="/var/run/netns/cni-19f77d70-230f-c619-381a-60e5e49e5746" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" iface="eth0" netns="/var/run/netns/cni-19f77d70-230f-c619-381a-60e5e49e5746" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4081] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.144 [INFO][4081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.414 [INFO][4090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.417 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.437 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.452 [WARNING][4090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.452 [INFO][4090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.458 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:06.487943 containerd[1502]: 2025-01-30 05:29:06.479 [INFO][4081] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:06.489852 containerd[1502]: time="2025-01-30T05:29:06.488079319Z" level=info msg="TearDown network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" successfully" Jan 30 05:29:06.489852 containerd[1502]: time="2025-01-30T05:29:06.488111601Z" level=info msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" returns successfully" Jan 30 05:29:06.492873 containerd[1502]: time="2025-01-30T05:29:06.491689734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-c7cnm,Uid:3a77443b-37f7-4875-b9d5-748a91d1aa99,Namespace:calico-apiserver,Attempt:1,}" Jan 30 05:29:06.501347 systemd[1]: run-netns-cni\x2d19f77d70\x2d230f\x2dc619\x2d381a\x2d60e5e49e5746.mount: Deactivated successfully. 
Jan 30 05:29:06.722179 systemd-networkd[1397]: calib738a745f10: Link UP Jan 30 05:29:06.724067 systemd-networkd[1397]: calib738a745f10: Gained carrier Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.613 [INFO][4110] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0 coredns-668d6bf9bc- kube-system 4f8dbe13-01dc-4a7e-bada-6efa483db38e 756 0 2025-01-30 05:28:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 coredns-668d6bf9bc-qz74f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib738a745f10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.613 [INFO][4110] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.661 [INFO][4123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" HandleID="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.674 [INFO][4123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" HandleID="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b690), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"coredns-668d6bf9bc-qz74f", "timestamp":"2025-01-30 05:29:06.661474624 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.674 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.674 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.674 [INFO][4123] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.677 [INFO][4123] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.686 [INFO][4123] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.693 [INFO][4123] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.695 [INFO][4123] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.697 [INFO][4123] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.697 [INFO][4123] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.700 [INFO][4123] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173 Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.705 [INFO][4123] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.712 [INFO][4123] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.712 [INFO][4123] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.712 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:29:06.748957 containerd[1502]: 2025-01-30 05:29:06.712 [INFO][4123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" HandleID="k8s-pod-network.3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.716 [INFO][4110] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f8dbe13-01dc-4a7e-bada-6efa483db38e", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"coredns-668d6bf9bc-qz74f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib738a745f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.717 [INFO][4110] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.1/32] ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.717 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib738a745f10 ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.723 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" 
WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.724 [INFO][4110] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f8dbe13-01dc-4a7e-bada-6efa483db38e", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173", Pod:"coredns-668d6bf9bc-qz74f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib738a745f10", MAC:"4a:d0:a4:40:c4:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:06.750237 containerd[1502]: 2025-01-30 05:29:06.740 [INFO][4110] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173" Namespace="kube-system" Pod="coredns-668d6bf9bc-qz74f" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:06.815191 systemd-networkd[1397]: calief3ac26edc8: Link UP Jan 30 05:29:06.816312 systemd-networkd[1397]: calief3ac26edc8: Gained carrier Jan 30 05:29:06.830012 containerd[1502]: time="2025-01-30T05:29:06.829553537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:06.830012 containerd[1502]: time="2025-01-30T05:29:06.829622558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:06.830012 containerd[1502]: time="2025-01-30T05:29:06.829636095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:06.830012 containerd[1502]: time="2025-01-30T05:29:06.829722429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.627 [INFO][4102] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0 calico-apiserver-5c5fbd8b55- calico-apiserver 3a77443b-37f7-4875-b9d5-748a91d1aa99 757 0 2025-01-30 05:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5fbd8b55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 calico-apiserver-5c5fbd8b55-c7cnm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calief3ac26edc8 [] []}} ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.627 [INFO][4102] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.679 [INFO][4127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" HandleID="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.687 [INFO][4127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" HandleID="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"calico-apiserver-5c5fbd8b55-c7cnm", "timestamp":"2025-01-30 05:29:06.679081982 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.687 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.713 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.713 [INFO][4127] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.779 [INFO][4127] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.786 [INFO][4127] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.791 [INFO][4127] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.792 [INFO][4127] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.795 [INFO][4127] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.795 [INFO][4127] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.797 [INFO][4127] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81 Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.801 [INFO][4127] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.807 [INFO][4127] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.807 [INFO][4127] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.807 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:29:06.851055 containerd[1502]: 2025-01-30 05:29:06.807 [INFO][4127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" HandleID="k8s-pod-network.24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.811 [INFO][4102] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a77443b-37f7-4875-b9d5-748a91d1aa99", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"calico-apiserver-5c5fbd8b55-c7cnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief3ac26edc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.812 [INFO][4102] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.2/32] ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.812 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief3ac26edc8 ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.815 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.817 [INFO][4102] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a77443b-37f7-4875-b9d5-748a91d1aa99", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81", Pod:"calico-apiserver-5c5fbd8b55-c7cnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief3ac26edc8", MAC:"16:b8:62:4b:3b:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:06.851717 containerd[1502]: 2025-01-30 05:29:06.843 [INFO][4102] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-c7cnm" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:06.867067 systemd[1]: Started cri-containerd-3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173.scope - libcontainer container 3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173. Jan 30 05:29:06.892588 containerd[1502]: time="2025-01-30T05:29:06.892005970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:06.892588 containerd[1502]: time="2025-01-30T05:29:06.892162740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:06.892588 containerd[1502]: time="2025-01-30T05:29:06.892209739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:06.892588 containerd[1502]: time="2025-01-30T05:29:06.892382779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:06.920682 systemd[1]: Started cri-containerd-24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81.scope - libcontainer container 24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81. 
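Both CNI ADD operations above draw from the same per-node affine block, 192.168.35.0/26: coredns-668d6bf9bc-qz74f is given 192.168.35.1/32 and calico-apiserver-5c5fbd8b55-c7cnm gets 192.168.35.2/32, and the later sandboxes in this section continue with .3 and .4. A rough illustration of that allocation pattern, covering only the address arithmetic and not Calico's actual ipam.go:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the log; a /26 spans 64 addresses.
	block := netip.MustParsePrefix("192.168.35.0/26")

	// Skip the network address and hand out the next /32s in order, which is
	// why the four pods in this section end up with .1, .2, .3 and .4.
	addr := block.Addr().Next()
	for i := 0; i < 4 && block.Contains(addr); i++ {
		fmt.Printf("assigned %s/32\n", addr)
		addr = addr.Next()
	}
}
```

As an aside, the Ports values in the WorkloadEndpoint dumps are hexadecimal: 0x35 is port 53 (dns and dns-tcp) and 0x23c1 is port 9153 (the CoreDNS metrics port).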
Jan 30 05:29:06.938923 containerd[1502]: time="2025-01-30T05:29:06.938637278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qz74f,Uid:4f8dbe13-01dc-4a7e-bada-6efa483db38e,Namespace:kube-system,Attempt:1,} returns sandbox id \"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173\"" Jan 30 05:29:06.948729 containerd[1502]: time="2025-01-30T05:29:06.948345850Z" level=info msg="CreateContainer within sandbox \"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:29:06.976598 containerd[1502]: time="2025-01-30T05:29:06.976434664Z" level=info msg="CreateContainer within sandbox \"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aca5adb3b08c7f9120f8eae65563c74caba6bad327e9afe9397c6540e851838c\"" Jan 30 05:29:06.978296 containerd[1502]: time="2025-01-30T05:29:06.977253208Z" level=info msg="StartContainer for \"aca5adb3b08c7f9120f8eae65563c74caba6bad327e9afe9397c6540e851838c\"" Jan 30 05:29:06.996988 containerd[1502]: time="2025-01-30T05:29:06.996952060Z" level=info msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" Jan 30 05:29:07.008618 containerd[1502]: time="2025-01-30T05:29:07.008562659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-c7cnm,Uid:3a77443b-37f7-4875-b9d5-748a91d1aa99,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81\"" Jan 30 05:29:07.020659 containerd[1502]: time="2025-01-30T05:29:07.020574037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 05:29:07.042028 systemd[1]: Started cri-containerd-aca5adb3b08c7f9120f8eae65563c74caba6bad327e9afe9397c6540e851838c.scope - libcontainer container aca5adb3b08c7f9120f8eae65563c74caba6bad327e9afe9397c6540e851838c. Jan 30 05:29:07.090562 containerd[1502]: time="2025-01-30T05:29:07.090378930Z" level=info msg="StartContainer for \"aca5adb3b08c7f9120f8eae65563c74caba6bad327e9afe9397c6540e851838c\" returns successfully" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.087 [INFO][4266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.088 [INFO][4266] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" iface="eth0" netns="/var/run/netns/cni-87346d9a-5eaa-14b8-3f9a-d669e4ad8f23" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.089 [INFO][4266] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" iface="eth0" netns="/var/run/netns/cni-87346d9a-5eaa-14b8-3f9a-d669e4ad8f23" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.091 [INFO][4266] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" iface="eth0" netns="/var/run/netns/cni-87346d9a-5eaa-14b8-3f9a-d669e4ad8f23" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.091 [INFO][4266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.091 [INFO][4266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.116 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.116 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.116 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.123 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.123 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.125 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:07.131534 containerd[1502]: 2025-01-30 05:29:07.128 [INFO][4266] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:07.132524 containerd[1502]: time="2025-01-30T05:29:07.131674407Z" level=info msg="TearDown network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" successfully" Jan 30 05:29:07.132524 containerd[1502]: time="2025-01-30T05:29:07.131698593Z" level=info msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" returns successfully" Jan 30 05:29:07.132524 containerd[1502]: time="2025-01-30T05:29:07.132371187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlwbr,Uid:89d6eda1-89d1-46d7-9c9a-f40abee39703,Namespace:calico-system,Attempt:1,}" Jan 30 05:29:07.274649 systemd-networkd[1397]: cali717bca916ac: Link UP Jan 30 05:29:07.275496 systemd-networkd[1397]: cali717bca916ac: Gained carrier Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.189 [INFO][4307] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0 csi-node-driver- calico-system 89d6eda1-89d1-46d7-9c9a-f40abee39703 769 0 2025-01-30 05:28:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 csi-node-driver-rlwbr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali717bca916ac [] []}} ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.190 [INFO][4307] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.225 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" HandleID="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.235 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" HandleID="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"csi-node-driver-rlwbr", "timestamp":"2025-01-30 05:29:07.225588471 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 
05:29:07.236 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.236 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.236 [INFO][4318] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.238 [INFO][4318] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.241 [INFO][4318] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.246 [INFO][4318] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.247 [INFO][4318] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.250 [INFO][4318] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.250 [INFO][4318] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.251 [INFO][4318] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458 Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.256 [INFO][4318] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.263 [INFO][4318] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 handle="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.263 [INFO][4318] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.264 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:29:07.297496 containerd[1502]: 2025-01-30 05:29:07.264 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" HandleID="k8s-pod-network.734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.269 [INFO][4307] cni-plugin/k8s.go 386: Populated endpoint ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d6eda1-89d1-46d7-9c9a-f40abee39703", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"csi-node-driver-rlwbr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717bca916ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.269 [INFO][4307] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.3/32] ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.269 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali717bca916ac ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.275 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.276 [INFO][4307] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d6eda1-89d1-46d7-9c9a-f40abee39703", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458", Pod:"csi-node-driver-rlwbr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717bca916ac", MAC:"56:91:16:3c:e2:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:07.298939 containerd[1502]: 2025-01-30 05:29:07.292 [INFO][4307] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458" Namespace="calico-system" Pod="csi-node-driver-rlwbr" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:07.341347 containerd[1502]: time="2025-01-30T05:29:07.341106752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:07.341347 containerd[1502]: time="2025-01-30T05:29:07.341170123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:07.341347 containerd[1502]: time="2025-01-30T05:29:07.341180332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:07.341347 containerd[1502]: time="2025-01-30T05:29:07.341267258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:07.374075 systemd[1]: Started cri-containerd-734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458.scope - libcontainer container 734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458. Jan 30 05:29:07.460598 systemd[1]: run-netns-cni\x2d87346d9a\x2d5eaa\x2d14b8\x2d3f9a\x2dd669e4ad8f23.mount: Deactivated successfully. 
Jan 30 05:29:07.481036 containerd[1502]: time="2025-01-30T05:29:07.480849473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rlwbr,Uid:89d6eda1-89d1-46d7-9c9a-f40abee39703,Namespace:calico-system,Attempt:1,} returns sandbox id \"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458\"" Jan 30 05:29:07.993110 containerd[1502]: time="2025-01-30T05:29:07.993036136Z" level=info msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" Jan 30 05:29:08.093934 kubelet[2720]: I0130 05:29:08.093801 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qz74f" podStartSLOduration=41.093772597 podStartE2EDuration="41.093772597s" podCreationTimestamp="2025-01-30 05:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:29:07.424497011 +0000 UTC m=+44.563347997" watchObservedRunningTime="2025-01-30 05:29:08.093772597 +0000 UTC m=+45.232623595" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.095 [INFO][4399] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.095 [INFO][4399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" iface="eth0" netns="/var/run/netns/cni-9989c091-eee4-a341-9681-d2aa939b1e3a" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.096 [INFO][4399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" iface="eth0" netns="/var/run/netns/cni-9989c091-eee4-a341-9681-d2aa939b1e3a" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.098 [INFO][4399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" iface="eth0" netns="/var/run/netns/cni-9989c091-eee4-a341-9681-d2aa939b1e3a" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.098 [INFO][4399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.098 [INFO][4399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.145 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.146 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.146 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.154 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.154 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.157 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:08.163314 containerd[1502]: 2025-01-30 05:29:08.159 [INFO][4399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:08.167973 containerd[1502]: time="2025-01-30T05:29:08.163844247Z" level=info msg="TearDown network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" successfully" Jan 30 05:29:08.167973 containerd[1502]: time="2025-01-30T05:29:08.163866601Z" level=info msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" returns successfully" Jan 30 05:29:08.167973 containerd[1502]: time="2025-01-30T05:29:08.164483019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-nbxvc,Uid:d4912169-2df4-466e-ac9f-4416a8f727db,Namespace:calico-apiserver,Attempt:1,}" Jan 30 05:29:08.168695 systemd[1]: run-netns-cni\x2d9989c091\x2deee4\x2da341\x2d9681\x2dd2aa939b1e3a.mount: Deactivated successfully. 
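Every StopPodSandbox trace in this section follows the same shape: the veth is already gone, the plugin tries to release the address by handle ID, logs the WARNING "Asked to release address but it doesn't exist. Ignoring", then retries by workload ID and finishes the teardown cleanly. A toy sketch of that two-step, idempotent release pattern, not Calico's ipam_plugin.go; the keys are made up:

```go
package main

import "fmt"

// allocations maps an IPAM handle or workload ID to the address it owns.
type allocations map[string]string

// release tries the handle first, then the workload ID, and treats a missing
// entry as a no-op so that repeated teardowns stay harmless.
func (a allocations) release(handleID, workloadID string) {
	for _, key := range []string{handleID, workloadID} {
		if ip, ok := a[key]; ok {
			delete(a, key)
			fmt.Printf("released %s via %s\n", ip, key)
			return
		}
		fmt.Printf("asked to release %s but nothing is recorded, ignoring\n", key)
	}
}

func main() {
	a := allocations{}
	// Nothing was recorded for this sandbox, so both lookups are no-ops,
	// mirroring the WARNING followed by "Teardown processing complete".
	a.release("handle-for-sandbox", "workload-for-sandbox")
}
```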
Jan 30 05:29:08.199855 systemd-networkd[1397]: calib738a745f10: Gained IPv6LL Jan 30 05:29:08.330657 systemd-networkd[1397]: calibd72c7c7d3a: Link UP Jan 30 05:29:08.332043 systemd-networkd[1397]: calibd72c7c7d3a: Gained carrier Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.254 [INFO][4412] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0 calico-apiserver-5c5fbd8b55- calico-apiserver d4912169-2df4-466e-ac9f-4416a8f727db 782 0 2025-01-30 05:28:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c5fbd8b55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 calico-apiserver-5c5fbd8b55-nbxvc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd72c7c7d3a [] []}} ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.254 [INFO][4412] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.281 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" HandleID="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.295 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" HandleID="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"calico-apiserver-5c5fbd8b55-nbxvc", "timestamp":"2025-01-30 05:29:08.281428494 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.295 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.295 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.295 [INFO][4424] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.298 [INFO][4424] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.303 [INFO][4424] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.307 [INFO][4424] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.309 [INFO][4424] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.312 [INFO][4424] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.312 [INFO][4424] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.313 [INFO][4424] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605 Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.317 [INFO][4424] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.323 [INFO][4424] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 handle="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.323 [INFO][4424] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.323 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:29:08.358703 containerd[1502]: 2025-01-30 05:29:08.323 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" HandleID="k8s-pod-network.c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.328 [INFO][4412] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4912169-2df4-466e-ac9f-4416a8f727db", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"calico-apiserver-5c5fbd8b55-nbxvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd72c7c7d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.328 [INFO][4412] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.4/32] ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.328 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd72c7c7d3a ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.331 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.333 [INFO][4412] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4912169-2df4-466e-ac9f-4416a8f727db", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605", Pod:"calico-apiserver-5c5fbd8b55-nbxvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd72c7c7d3a", MAC:"ba:05:cd:1f:97:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:08.359322 containerd[1502]: 2025-01-30 05:29:08.353 [INFO][4412] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605" Namespace="calico-apiserver" Pod="calico-apiserver-5c5fbd8b55-nbxvc" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:08.390547 systemd-networkd[1397]: cali717bca916ac: Gained IPv6LL Jan 30 05:29:08.399258 containerd[1502]: time="2025-01-30T05:29:08.399156234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:08.399768 containerd[1502]: time="2025-01-30T05:29:08.399544777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:08.399768 containerd[1502]: time="2025-01-30T05:29:08.399563442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:08.400411 containerd[1502]: time="2025-01-30T05:29:08.400196733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:08.445150 systemd[1]: Started cri-containerd-c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605.scope - libcontainer container c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605. 
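Once the endpoint is written back with its MAC and the host-side veth calibd72c7c7d3a, containerd loads the runc v2 shim plugins and systemd places the new sandbox in a transient cri-containerd-<containerID>.scope unit. The sketch below reconstructs that unit name and also the \x2d escaping systemd applies to dashes in unit names, which is why the netns mount units further down appear as run-netns-cni\x2d….mount. The helper names are mine, and the escaping shown is a simplified illustration, not systemd's full algorithm (which escapes other non-alphanumerics as well).

    package main

    import (
        "fmt"
        "strings"
    )

    // scopeName builds the transient unit name seen in the
    // "Started cri-containerd-<id>.scope" lines.
    func scopeName(containerID string) string {
        return "cri-containerd-" + containerID + ".scope"
    }

    // mountUnitName approximates systemd's path-to-unit escaping for .mount
    // units: drop the leading "/", turn the remaining "/" into "-", and
    // escape literal "-" inside components as \x2d.
    func mountUnitName(path string) string {
        parts := strings.Split(strings.TrimPrefix(path, "/"), "/")
        for i, p := range parts {
            parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
        }
        return strings.Join(parts, "-") + ".mount"
    }

    func main() {
        fmt.Println(scopeName("c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605"))
        // Matches the run-netns mount unit deactivated later in the log.
        fmt.Println(mountUnitName("/run/netns/cni-9ca2ee61-b965-cc32-f00a-06e17de592ac"))
    }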
Jan 30 05:29:08.517800 systemd-networkd[1397]: calief3ac26edc8: Gained IPv6LL Jan 30 05:29:08.566202 containerd[1502]: time="2025-01-30T05:29:08.566028430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c5fbd8b55-nbxvc,Uid:d4912169-2df4-466e-ac9f-4416a8f727db,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605\"" Jan 30 05:29:09.016512 containerd[1502]: time="2025-01-30T05:29:09.016150830Z" level=info msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.088 [INFO][4502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.088 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" iface="eth0" netns="/var/run/netns/cni-9ca2ee61-b965-cc32-f00a-06e17de592ac" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.089 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" iface="eth0" netns="/var/run/netns/cni-9ca2ee61-b965-cc32-f00a-06e17de592ac" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.089 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" iface="eth0" netns="/var/run/netns/cni-9ca2ee61-b965-cc32-f00a-06e17de592ac" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.089 [INFO][4502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.089 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.120 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.120 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.121 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.126 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.126 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.128 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:09.136951 containerd[1502]: 2025-01-30 05:29:09.133 [INFO][4502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:09.139103 containerd[1502]: time="2025-01-30T05:29:09.139054481Z" level=info msg="TearDown network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" successfully" Jan 30 05:29:09.139280 containerd[1502]: time="2025-01-30T05:29:09.139194750Z" level=info msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" returns successfully" Jan 30 05:29:09.141561 containerd[1502]: time="2025-01-30T05:29:09.141038564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s695l,Uid:69bf0978-c832-4b9d-bcaa-b4229f459b1a,Namespace:kube-system,Attempt:1,}" Jan 30 05:29:09.147057 systemd[1]: run-netns-cni\x2d9ca2ee61\x2db965\x2dcc32\x2df00a\x2d06e17de592ac.mount: Deactivated successfully. Jan 30 05:29:09.294288 systemd-networkd[1397]: cali32a1c6fe878: Link UP Jan 30 05:29:09.295601 systemd-networkd[1397]: cali32a1c6fe878: Gained carrier Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.207 [INFO][4515] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0 coredns-668d6bf9bc- kube-system 69bf0978-c832-4b9d-bcaa-b4229f459b1a 795 0 2025-01-30 05:28:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 coredns-668d6bf9bc-s695l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32a1c6fe878 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.207 [INFO][4515] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.254 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" HandleID="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" 
Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.261 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" HandleID="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ab20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"coredns-668d6bf9bc-s695l", "timestamp":"2025-01-30 05:29:09.254561282 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.262 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.262 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.262 [INFO][4526] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.264 [INFO][4526] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.268 [INFO][4526] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.271 [INFO][4526] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.273 [INFO][4526] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.275 [INFO][4526] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.275 [INFO][4526] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.276 [INFO][4526] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435 Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.281 [INFO][4526] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.286 [INFO][4526] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.5/26] block=192.168.35.0/26 handle="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.286 [INFO][4526] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.5/26] 
handle="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.286 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:09.320069 containerd[1502]: 2025-01-30 05:29:09.286 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.5/26] IPv6=[] ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" HandleID="k8s-pod-network.75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.323170 containerd[1502]: 2025-01-30 05:29:09.289 [INFO][4515] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69bf0978-c832-4b9d-bcaa-b4229f459b1a", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"coredns-668d6bf9bc-s695l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32a1c6fe878", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:09.323170 containerd[1502]: 2025-01-30 05:29:09.289 [INFO][4515] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.5/32] ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.323170 containerd[1502]: 2025-01-30 05:29:09.289 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32a1c6fe878 ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.323170 containerd[1502]: 
2025-01-30 05:29:09.295 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.323170 containerd[1502]: 2025-01-30 05:29:09.295 [INFO][4515] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69bf0978-c832-4b9d-bcaa-b4229f459b1a", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435", Pod:"coredns-668d6bf9bc-s695l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32a1c6fe878", MAC:"ca:63:91:eb:9e:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:09.323170 containerd[1502]: 2025-01-30 05:29:09.311 [INFO][4515] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435" Namespace="kube-system" Pod="coredns-668d6bf9bc-s695l" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:09.386604 containerd[1502]: time="2025-01-30T05:29:09.384303719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:09.386604 containerd[1502]: time="2025-01-30T05:29:09.386262453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:09.386604 containerd[1502]: time="2025-01-30T05:29:09.386275408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:09.386604 containerd[1502]: time="2025-01-30T05:29:09.386357996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:09.416454 systemd[1]: Started cri-containerd-75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435.scope - libcontainer container 75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435. Jan 30 05:29:09.494481 containerd[1502]: time="2025-01-30T05:29:09.494431696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s695l,Uid:69bf0978-c832-4b9d-bcaa-b4229f459b1a,Namespace:kube-system,Attempt:1,} returns sandbox id \"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435\"" Jan 30 05:29:09.503280 containerd[1502]: time="2025-01-30T05:29:09.503237501Z" level=info msg="CreateContainer within sandbox \"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:29:09.529257 containerd[1502]: time="2025-01-30T05:29:09.529088253Z" level=info msg="CreateContainer within sandbox \"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d\"" Jan 30 05:29:09.531127 containerd[1502]: time="2025-01-30T05:29:09.531084080Z" level=info msg="StartContainer for \"c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d\"" Jan 30 05:29:09.585015 systemd[1]: Started cri-containerd-c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d.scope - libcontainer container c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d. Jan 30 05:29:09.635952 containerd[1502]: time="2025-01-30T05:29:09.635877167Z" level=info msg="StartContainer for \"c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d\" returns successfully" Jan 30 05:29:09.992392 containerd[1502]: time="2025-01-30T05:29:09.992268841Z" level=info msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" Jan 30 05:29:10.118044 systemd-networkd[1397]: calibd72c7c7d3a: Gained IPv6LL Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.074 [INFO][4650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.074 [INFO][4650] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" iface="eth0" netns="/var/run/netns/cni-fd393c60-1b29-b83e-01ef-fc9eb88cd78d" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.075 [INFO][4650] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" iface="eth0" netns="/var/run/netns/cni-fd393c60-1b29-b83e-01ef-fc9eb88cd78d" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.075 [INFO][4650] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" iface="eth0" netns="/var/run/netns/cni-fd393c60-1b29-b83e-01ef-fc9eb88cd78d" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.076 [INFO][4650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.076 [INFO][4650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.162 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.162 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.163 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.173 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.173 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.176 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:10.182587 containerd[1502]: 2025-01-30 05:29:10.179 [INFO][4650] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:10.183854 containerd[1502]: time="2025-01-30T05:29:10.183115945Z" level=info msg="TearDown network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" successfully" Jan 30 05:29:10.183854 containerd[1502]: time="2025-01-30T05:29:10.183159167Z" level=info msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" returns successfully" Jan 30 05:29:10.184108 containerd[1502]: time="2025-01-30T05:29:10.184066463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7559cb77-j6lmn,Uid:bad96a0b-b018-45a1-96af-9edbd5119f12,Namespace:calico-system,Attempt:1,}" Jan 30 05:29:10.305725 containerd[1502]: time="2025-01-30T05:29:10.305669535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:10.308099 containerd[1502]: time="2025-01-30T05:29:10.308028727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 05:29:10.309619 containerd[1502]: time="2025-01-30T05:29:10.309585414Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:10.313417 containerd[1502]: time="2025-01-30T05:29:10.313351497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:10.314646 containerd[1502]: time="2025-01-30T05:29:10.314596598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.293984929s" Jan 30 05:29:10.314646 containerd[1502]: time="2025-01-30T05:29:10.314630172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 05:29:10.318079 containerd[1502]: time="2025-01-30T05:29:10.317108071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 05:29:10.319879 containerd[1502]: time="2025-01-30T05:29:10.319792415Z" level=info msg="CreateContainer within sandbox \"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 05:29:10.340119 containerd[1502]: time="2025-01-30T05:29:10.340041159Z" level=info msg="CreateContainer within sandbox \"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2143eaab40271153134fdc56726c7842a038bd2a39fad64836940566b81f16e4\"" Jan 30 05:29:10.345229 containerd[1502]: time="2025-01-30T05:29:10.341177743Z" level=info msg="StartContainer for \"2143eaab40271153134fdc56726c7842a038bd2a39fad64836940566b81f16e4\"" Jan 30 05:29:10.348140 systemd-networkd[1397]: cali966d4ef5036: Link UP Jan 30 05:29:10.349807 systemd-networkd[1397]: cali966d4ef5036: Gained carrier Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 
05:29:10.249 [INFO][4663] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0 calico-kube-controllers-5c7559cb77- calico-system bad96a0b-b018-45a1-96af-9edbd5119f12 804 0 2025-01-30 05:28:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c7559cb77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-d-6ba27b8de2 calico-kube-controllers-5c7559cb77-j6lmn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali966d4ef5036 [] []}} ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.249 [INFO][4663] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.292 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" HandleID="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.302 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" HandleID="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bacb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-d-6ba27b8de2", "pod":"calico-kube-controllers-5c7559cb77-j6lmn", "timestamp":"2025-01-30 05:29:10.292245332 +0000 UTC"}, Hostname:"ci-4081-3-0-d-6ba27b8de2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.302 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.302 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
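The Workload and WorkloadEndpoint names in these entries follow a visible convention: node name, orchestrator, pod name, and interface are each written with their own dashes doubled and then joined by single dashes, so ci-4081-3-0-d-6ba27b8de2 plus calico-kube-controllers-5c7559cb77-j6lmn plus eth0 becomes the long ci--4081--…--j6lmn-eth0 form. A small Go sketch reconstructing that pattern from the logged names (an inference from this log, not Calico's actual code):

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName reproduces the WorkloadEndpoint naming pattern visible in the
    // log: each field has its own "-" doubled, then the fields are joined
    // with single dashes, so the separators stay unambiguous.
    func wepName(node, orchestrator, pod, iface string) string {
        esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
        return strings.Join([]string{esc(node), esc(orchestrator), esc(pod), esc(iface)}, "-")
    }

    func main() {
        fmt.Println(wepName("ci-4081-3-0-d-6ba27b8de2", "k8s",
            "calico-kube-controllers-5c7559cb77-j6lmn", "eth0"))
        // ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0
    }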
Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.302 [INFO][4676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-d-6ba27b8de2' Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.304 [INFO][4676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.309 [INFO][4676] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.313 [INFO][4676] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.315 [INFO][4676] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.321 [INFO][4676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.321 [INFO][4676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.323 [INFO][4676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.327 [INFO][4676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.333 [INFO][4676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.6/26] block=192.168.35.0/26 handle="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.333 [INFO][4676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.6/26] handle="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" host="ci-4081-3-0-d-6ba27b8de2" Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.333 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
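Each claimed address is recorded under a handle of the form k8s-pod-network.<containerID>, and teardown releases by that same handle; when nothing is recorded for a handle, the release is simply ignored, which is what the "Asked to release address but it doesn't exist. Ignoring" warnings elsewhere in this log show. A toy Go model of that bookkeeping, illustrative only:

    package main

    import "fmt"

    // handleTracker is a toy model of the per-handle bookkeeping the IPAM
    // plugin logs: addresses are claimed under "k8s-pod-network.<containerID>"
    // and released by the same handle. Releasing an unknown handle is a no-op.
    type handleTracker map[string]string // handle -> assigned IP

    func (t handleTracker) Claim(handle, ip string) { t[handle] = ip }

    func (t handleTracker) Release(handle string) {
        if _, ok := t[handle]; !ok {
            fmt.Println("WARNING: asked to release", handle, "but it doesn't exist; ignoring")
            return
        }
        delete(t, handle)
    }

    func main() {
        t := handleTracker{}
        t.Claim("k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b", "192.168.35.6")
        t.Release("k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe") // never claimed here -> ignored
    }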
Jan 30 05:29:10.379641 containerd[1502]: 2025-01-30 05:29:10.334 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.6/26] IPv6=[] ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" HandleID="k8s-pod-network.712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.339 [INFO][4663] cni-plugin/k8s.go 386: Populated endpoint ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0", GenerateName:"calico-kube-controllers-5c7559cb77-", Namespace:"calico-system", SelfLink:"", UID:"bad96a0b-b018-45a1-96af-9edbd5119f12", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7559cb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"", Pod:"calico-kube-controllers-5c7559cb77-j6lmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d4ef5036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.339 [INFO][4663] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.6/32] ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.339 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali966d4ef5036 ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.349 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 
05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.350 [INFO][4663] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0", GenerateName:"calico-kube-controllers-5c7559cb77-", Namespace:"calico-system", SelfLink:"", UID:"bad96a0b-b018-45a1-96af-9edbd5119f12", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7559cb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b", Pod:"calico-kube-controllers-5c7559cb77-j6lmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d4ef5036", MAC:"de:dd:19:28:b7:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:10.380862 containerd[1502]: 2025-01-30 05:29:10.368 [INFO][4663] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b" Namespace="calico-system" Pod="calico-kube-controllers-5c7559cb77-j6lmn" WorkloadEndpoint="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:10.399112 systemd[1]: Started cri-containerd-2143eaab40271153134fdc56726c7842a038bd2a39fad64836940566b81f16e4.scope - libcontainer container 2143eaab40271153134fdc56726c7842a038bd2a39fad64836940566b81f16e4. Jan 30 05:29:10.425148 containerd[1502]: time="2025-01-30T05:29:10.423475535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:29:10.425148 containerd[1502]: time="2025-01-30T05:29:10.424081525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:29:10.425148 containerd[1502]: time="2025-01-30T05:29:10.424094379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:10.425148 containerd[1502]: time="2025-01-30T05:29:10.424167408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:29:10.438812 kubelet[2720]: I0130 05:29:10.438498 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s695l" podStartSLOduration=43.438481294 podStartE2EDuration="43.438481294s" podCreationTimestamp="2025-01-30 05:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:29:10.437995635 +0000 UTC m=+47.576846602" watchObservedRunningTime="2025-01-30 05:29:10.438481294 +0000 UTC m=+47.577332260" Jan 30 05:29:10.448949 systemd[1]: Started cri-containerd-712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b.scope - libcontainer container 712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b. Jan 30 05:29:10.459148 systemd[1]: run-containerd-runc-k8s.io-c90bc2487663dbd5ee04aea3042e426ce4f90c6201ea478c4c7bd9145ec3803d-runc.06q0Sc.mount: Deactivated successfully. Jan 30 05:29:10.459432 systemd[1]: run-netns-cni\x2dfd393c60\x2d1b29\x2db83e\x2d01ef\x2dfc9eb88cd78d.mount: Deactivated successfully. Jan 30 05:29:10.493837 containerd[1502]: time="2025-01-30T05:29:10.493749813Z" level=info msg="StartContainer for \"2143eaab40271153134fdc56726c7842a038bd2a39fad64836940566b81f16e4\" returns successfully" Jan 30 05:29:10.575186 containerd[1502]: time="2025-01-30T05:29:10.574937872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7559cb77-j6lmn,Uid:bad96a0b-b018-45a1-96af-9edbd5119f12,Namespace:calico-system,Attempt:1,} returns sandbox id \"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b\"" Jan 30 05:29:10.949171 systemd-networkd[1397]: cali32a1c6fe878: Gained IPv6LL Jan 30 05:29:11.510226 kubelet[2720]: I0130 05:29:11.509788 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-c7cnm" podStartSLOduration=35.205991127 podStartE2EDuration="38.509761765s" podCreationTimestamp="2025-01-30 05:28:33 +0000 UTC" firstStartedPulling="2025-01-30 05:29:07.012776259 +0000 UTC m=+44.151627227" lastFinishedPulling="2025-01-30 05:29:10.316546898 +0000 UTC m=+47.455397865" observedRunningTime="2025-01-30 05:29:11.467683185 +0000 UTC m=+48.606534202" watchObservedRunningTime="2025-01-30 05:29:11.509761765 +0000 UTC m=+48.648612742" Jan 30 05:29:11.526025 systemd-networkd[1397]: cali966d4ef5036: Gained IPv6LL Jan 30 05:29:11.717225 containerd[1502]: time="2025-01-30T05:29:11.717173851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:11.718940 containerd[1502]: time="2025-01-30T05:29:11.718761709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 05:29:11.720771 containerd[1502]: time="2025-01-30T05:29:11.720222113Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:11.722793 containerd[1502]: time="2025-01-30T05:29:11.722740401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:11.723591 containerd[1502]: time="2025-01-30T05:29:11.723199929Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.406068103s" Jan 30 05:29:11.723591 containerd[1502]: time="2025-01-30T05:29:11.723229646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 05:29:11.724593 containerd[1502]: time="2025-01-30T05:29:11.724399535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 05:29:11.727479 containerd[1502]: time="2025-01-30T05:29:11.727435993Z" level=info msg="CreateContainer within sandbox \"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 05:29:11.759982 containerd[1502]: time="2025-01-30T05:29:11.759926220Z" level=info msg="CreateContainer within sandbox \"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e94f27308080b5053bda30cc3667a19d3ea8137c8aaa67d1c808ba3116791212\"" Jan 30 05:29:11.760875 containerd[1502]: time="2025-01-30T05:29:11.760760366Z" level=info msg="StartContainer for \"e94f27308080b5053bda30cc3667a19d3ea8137c8aaa67d1c808ba3116791212\"" Jan 30 05:29:11.803109 systemd[1]: Started cri-containerd-e94f27308080b5053bda30cc3667a19d3ea8137c8aaa67d1c808ba3116791212.scope - libcontainer container e94f27308080b5053bda30cc3667a19d3ea8137c8aaa67d1c808ba3116791212. Jan 30 05:29:11.839240 containerd[1502]: time="2025-01-30T05:29:11.839077543Z" level=info msg="StartContainer for \"e94f27308080b5053bda30cc3667a19d3ea8137c8aaa67d1c808ba3116791212\" returns successfully" Jan 30 05:29:12.111572 containerd[1502]: time="2025-01-30T05:29:12.111420488Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:12.113313 containerd[1502]: time="2025-01-30T05:29:12.113216305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 05:29:12.117485 containerd[1502]: time="2025-01-30T05:29:12.117422325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 392.991559ms" Jan 30 05:29:12.117485 containerd[1502]: time="2025-01-30T05:29:12.117479183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 05:29:12.119167 containerd[1502]: time="2025-01-30T05:29:12.119080016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 05:29:12.123629 containerd[1502]: time="2025-01-30T05:29:12.123502109Z" level=info msg="CreateContainer within sandbox \"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 05:29:12.147435 containerd[1502]: time="2025-01-30T05:29:12.146559203Z" 
level=info msg="CreateContainer within sandbox \"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4fa6210366e229175d3166620b3e84548d7a20cfe0179717c69b92cd9333fa6a\"" Jan 30 05:29:12.153460 containerd[1502]: time="2025-01-30T05:29:12.151197899Z" level=info msg="StartContainer for \"4fa6210366e229175d3166620b3e84548d7a20cfe0179717c69b92cd9333fa6a\"" Jan 30 05:29:12.198101 systemd[1]: Started cri-containerd-4fa6210366e229175d3166620b3e84548d7a20cfe0179717c69b92cd9333fa6a.scope - libcontainer container 4fa6210366e229175d3166620b3e84548d7a20cfe0179717c69b92cd9333fa6a. Jan 30 05:29:12.277435 containerd[1502]: time="2025-01-30T05:29:12.277384449Z" level=info msg="StartContainer for \"4fa6210366e229175d3166620b3e84548d7a20cfe0179717c69b92cd9333fa6a\" returns successfully" Jan 30 05:29:13.466812 kubelet[2720]: I0130 05:29:13.466750 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:29:14.385517 containerd[1502]: time="2025-01-30T05:29:14.385446346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:14.387010 containerd[1502]: time="2025-01-30T05:29:14.386954926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 05:29:14.389164 containerd[1502]: time="2025-01-30T05:29:14.389121915Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:14.392019 containerd[1502]: time="2025-01-30T05:29:14.391963926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:14.392588 containerd[1502]: time="2025-01-30T05:29:14.392555979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.273433181s" Jan 30 05:29:14.392672 containerd[1502]: time="2025-01-30T05:29:14.392589544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 05:29:14.393877 containerd[1502]: time="2025-01-30T05:29:14.393741530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 05:29:14.409852 containerd[1502]: time="2025-01-30T05:29:14.409797845Z" level=info msg="CreateContainer within sandbox \"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 05:29:14.429993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995387523.mount: Deactivated successfully. 
Jan 30 05:29:14.431256 containerd[1502]: time="2025-01-30T05:29:14.431166431Z" level=info msg="CreateContainer within sandbox \"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262\"" Jan 30 05:29:14.431791 containerd[1502]: time="2025-01-30T05:29:14.431772611Z" level=info msg="StartContainer for \"d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262\"" Jan 30 05:29:14.479029 systemd[1]: Started cri-containerd-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262.scope - libcontainer container d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262. Jan 30 05:29:14.532481 containerd[1502]: time="2025-01-30T05:29:14.532306994Z" level=info msg="StartContainer for \"d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262\" returns successfully" Jan 30 05:29:15.537976 kubelet[2720]: I0130 05:29:15.537138 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c5fbd8b55-nbxvc" podStartSLOduration=38.985664577 podStartE2EDuration="42.537118791s" podCreationTimestamp="2025-01-30 05:28:33 +0000 UTC" firstStartedPulling="2025-01-30 05:29:08.567320248 +0000 UTC m=+45.706171215" lastFinishedPulling="2025-01-30 05:29:12.118774421 +0000 UTC m=+49.257625429" observedRunningTime="2025-01-30 05:29:12.471139371 +0000 UTC m=+49.609990339" watchObservedRunningTime="2025-01-30 05:29:15.537118791 +0000 UTC m=+52.675969778" Jan 30 05:29:15.578828 kubelet[2720]: I0130 05:29:15.578760 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c7559cb77-j6lmn" podStartSLOduration=37.76198522 podStartE2EDuration="41.578741342s" podCreationTimestamp="2025-01-30 05:28:34 +0000 UTC" firstStartedPulling="2025-01-30 05:29:10.576828248 +0000 UTC m=+47.715679215" lastFinishedPulling="2025-01-30 05:29:14.393584369 +0000 UTC m=+51.532435337" observedRunningTime="2025-01-30 05:29:15.539990672 +0000 UTC m=+52.678841659" watchObservedRunningTime="2025-01-30 05:29:15.578741342 +0000 UTC m=+52.717592299" Jan 30 05:29:15.831861 containerd[1502]: time="2025-01-30T05:29:15.831142163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:15.832575 containerd[1502]: time="2025-01-30T05:29:15.832527367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 05:29:15.833072 containerd[1502]: time="2025-01-30T05:29:15.832954354Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:15.835101 containerd[1502]: time="2025-01-30T05:29:15.835053866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:29:15.835938 containerd[1502]: time="2025-01-30T05:29:15.835578711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.441813085s" Jan 30 05:29:15.835938 containerd[1502]: time="2025-01-30T05:29:15.835606093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 05:29:15.838641 containerd[1502]: time="2025-01-30T05:29:15.838607261Z" level=info msg="CreateContainer within sandbox \"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 05:29:15.860345 containerd[1502]: time="2025-01-30T05:29:15.860297255Z" level=info msg="CreateContainer within sandbox \"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8d796962ab93ed89eb0b4de7de5e56a0b29b514e019b4829c8bc80cf2a62e381\"" Jan 30 05:29:15.862667 containerd[1502]: time="2025-01-30T05:29:15.862570970Z" level=info msg="StartContainer for \"8d796962ab93ed89eb0b4de7de5e56a0b29b514e019b4829c8bc80cf2a62e381\"" Jan 30 05:29:15.895220 systemd[1]: Started cri-containerd-8d796962ab93ed89eb0b4de7de5e56a0b29b514e019b4829c8bc80cf2a62e381.scope - libcontainer container 8d796962ab93ed89eb0b4de7de5e56a0b29b514e019b4829c8bc80cf2a62e381. Jan 30 05:29:15.930352 containerd[1502]: time="2025-01-30T05:29:15.930302368Z" level=info msg="StartContainer for \"8d796962ab93ed89eb0b4de7de5e56a0b29b514e019b4829c8bc80cf2a62e381\" returns successfully" Jan 30 05:29:16.365655 kubelet[2720]: I0130 05:29:16.365579 2720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 05:29:16.369686 kubelet[2720]: I0130 05:29:16.369635 2720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 05:29:23.045976 containerd[1502]: time="2025-01-30T05:29:23.045482450Z" level=info msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.198 [WARNING][4994] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4912169-2df4-466e-ac9f-4416a8f727db", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605", Pod:"calico-apiserver-5c5fbd8b55-nbxvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd72c7c7d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.200 [INFO][4994] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.200 [INFO][4994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" iface="eth0" netns="" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.200 [INFO][4994] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.200 [INFO][4994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.241 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.241 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.241 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.249 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.249 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.251 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.261910 containerd[1502]: 2025-01-30 05:29:23.258 [INFO][4994] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.265030 containerd[1502]: time="2025-01-30T05:29:23.261957279Z" level=info msg="TearDown network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" successfully" Jan 30 05:29:23.265030 containerd[1502]: time="2025-01-30T05:29:23.261996634Z" level=info msg="StopPodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" returns successfully" Jan 30 05:29:23.265030 containerd[1502]: time="2025-01-30T05:29:23.262782503Z" level=info msg="RemovePodSandbox for \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" Jan 30 05:29:23.268904 containerd[1502]: time="2025-01-30T05:29:23.268827463Z" level=info msg="Forcibly stopping sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\"" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.330 [WARNING][5018] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4912169-2df4-466e-ac9f-4416a8f727db", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"c930df2af9707562c2a8960ef0e167fbf441c7a4295e4c30931f0d7686807605", Pod:"calico-apiserver-5c5fbd8b55-nbxvc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd72c7c7d3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.330 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.330 [INFO][5018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" iface="eth0" netns="" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.330 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.330 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.361 [INFO][5025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.362 [INFO][5025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.362 [INFO][5025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.369 [WARNING][5025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.369 [INFO][5025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" HandleID="k8s-pod-network.d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--nbxvc-eth0" Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.371 [INFO][5025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.378872 containerd[1502]: 2025-01-30 05:29:23.375 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263" Jan 30 05:29:23.378872 containerd[1502]: time="2025-01-30T05:29:23.378474352Z" level=info msg="TearDown network for sandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" successfully" Jan 30 05:29:23.393287 containerd[1502]: time="2025-01-30T05:29:23.393210151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:23.393480 containerd[1502]: time="2025-01-30T05:29:23.393319370Z" level=info msg="RemovePodSandbox \"d7adc2916c4d8d2994121b965ed898607dfcb8ba2a065d5820f72e23aa601263\" returns successfully" Jan 30 05:29:23.394018 containerd[1502]: time="2025-01-30T05:29:23.393968707Z" level=info msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.453 [WARNING][5043] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69bf0978-c832-4b9d-bcaa-b4229f459b1a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435", Pod:"coredns-668d6bf9bc-s695l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32a1c6fe878", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.454 [INFO][5043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.454 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" iface="eth0" netns="" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.454 [INFO][5043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.454 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.479 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.479 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.479 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.484 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.485 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.486 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.492311 containerd[1502]: 2025-01-30 05:29:23.489 [INFO][5043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.492945 containerd[1502]: time="2025-01-30T05:29:23.492340198Z" level=info msg="TearDown network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" successfully" Jan 30 05:29:23.492945 containerd[1502]: time="2025-01-30T05:29:23.492365016Z" level=info msg="StopPodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" returns successfully" Jan 30 05:29:23.492945 containerd[1502]: time="2025-01-30T05:29:23.492857531Z" level=info msg="RemovePodSandbox for \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" Jan 30 05:29:23.492945 containerd[1502]: time="2025-01-30T05:29:23.492879823Z" level=info msg="Forcibly stopping sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\"" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.535 [WARNING][5068] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69bf0978-c832-4b9d-bcaa-b4229f459b1a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"75f2ae499afbada1da5d29aefb6ceaf98df90d494b8d384a7a6288b3764ce435", Pod:"coredns-668d6bf9bc-s695l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32a1c6fe878", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.535 [INFO][5068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.535 [INFO][5068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" iface="eth0" netns="" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.535 [INFO][5068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.535 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.572 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.572 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.572 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.578 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.579 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" HandleID="k8s-pod-network.68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--s695l-eth0" Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.580 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.588414 containerd[1502]: 2025-01-30 05:29:23.583 [INFO][5068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe" Jan 30 05:29:23.589511 containerd[1502]: time="2025-01-30T05:29:23.588443796Z" level=info msg="TearDown network for sandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" successfully" Jan 30 05:29:23.599879 containerd[1502]: time="2025-01-30T05:29:23.599393560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:23.599879 containerd[1502]: time="2025-01-30T05:29:23.599472782Z" level=info msg="RemovePodSandbox \"68b9c6b866f29881ff28645116b09ce77112337ff480f2f6d7ec1dd94bd755fe\" returns successfully" Jan 30 05:29:23.600436 containerd[1502]: time="2025-01-30T05:29:23.600052134Z" level=info msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.652 [WARNING][5093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d6eda1-89d1-46d7-9c9a-f40abee39703", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458", Pod:"csi-node-driver-rlwbr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717bca916ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.652 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.652 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" iface="eth0" netns="" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.653 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.653 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.685 [INFO][5099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.685 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.685 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.694 [WARNING][5099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.694 [INFO][5099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.696 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.704564 containerd[1502]: 2025-01-30 05:29:23.701 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.705283 containerd[1502]: time="2025-01-30T05:29:23.704556544Z" level=info msg="TearDown network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" successfully" Jan 30 05:29:23.705283 containerd[1502]: time="2025-01-30T05:29:23.704588015Z" level=info msg="StopPodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" returns successfully" Jan 30 05:29:23.705283 containerd[1502]: time="2025-01-30T05:29:23.705198467Z" level=info msg="RemovePodSandbox for \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" Jan 30 05:29:23.705283 containerd[1502]: time="2025-01-30T05:29:23.705229095Z" level=info msg="Forcibly stopping sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\"" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.760 [WARNING][5117] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"89d6eda1-89d1-46d7-9c9a-f40abee39703", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"734b22c8a04bc522de1d30cbe78713bc2ecdb6b7e52f222f747db6f66196d458", Pod:"csi-node-driver-rlwbr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali717bca916ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.760 [INFO][5117] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.761 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" iface="eth0" netns="" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.761 [INFO][5117] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.761 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.789 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.790 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.790 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.796 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.797 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" HandleID="k8s-pod-network.d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-csi--node--driver--rlwbr-eth0" Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.798 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.806061 containerd[1502]: 2025-01-30 05:29:23.802 [INFO][5117] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67" Jan 30 05:29:23.807098 containerd[1502]: time="2025-01-30T05:29:23.806088129Z" level=info msg="TearDown network for sandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" successfully" Jan 30 05:29:23.811216 containerd[1502]: time="2025-01-30T05:29:23.811154703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:23.811216 containerd[1502]: time="2025-01-30T05:29:23.811203527Z" level=info msg="RemovePodSandbox \"d434a09840e8f4179069cc5d43e1c90cfa67c393d5bd810578ecf1120aa4cb67\" returns successfully" Jan 30 05:29:23.811808 containerd[1502]: time="2025-01-30T05:29:23.811745116Z" level=info msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.857 [WARNING][5142] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a77443b-37f7-4875-b9d5-748a91d1aa99", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81", Pod:"calico-apiserver-5c5fbd8b55-c7cnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief3ac26edc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.857 [INFO][5142] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.857 [INFO][5142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" iface="eth0" netns="" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.857 [INFO][5142] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.857 [INFO][5142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.879 [INFO][5149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.880 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.880 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.888 [WARNING][5149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.888 [INFO][5149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.890 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:23.899588 containerd[1502]: 2025-01-30 05:29:23.895 [INFO][5142] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:23.902235 containerd[1502]: time="2025-01-30T05:29:23.899614824Z" level=info msg="TearDown network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" successfully" Jan 30 05:29:23.902235 containerd[1502]: time="2025-01-30T05:29:23.899656393Z" level=info msg="StopPodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" returns successfully" Jan 30 05:29:23.902235 containerd[1502]: time="2025-01-30T05:29:23.900263137Z" level=info msg="RemovePodSandbox for \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" Jan 30 05:29:23.902235 containerd[1502]: time="2025-01-30T05:29:23.900295359Z" level=info msg="Forcibly stopping sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\"" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.942 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0", GenerateName:"calico-apiserver-5c5fbd8b55-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a77443b-37f7-4875-b9d5-748a91d1aa99", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c5fbd8b55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"24b43ff5c9274b01b91be7f39d182b68ae911b4102a1ff8f3b04c5937b097a81", Pod:"calico-apiserver-5c5fbd8b55-c7cnm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calief3ac26edc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.947 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.947 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" iface="eth0" netns="" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.947 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.947 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.982 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.982 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.983 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.991 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.991 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" HandleID="k8s-pod-network.ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--apiserver--5c5fbd8b55--c7cnm-eth0" Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.994 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:24.000248 containerd[1502]: 2025-01-30 05:29:23.997 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c" Jan 30 05:29:24.000248 containerd[1502]: time="2025-01-30T05:29:24.000191205Z" level=info msg="TearDown network for sandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" successfully" Jan 30 05:29:24.005713 containerd[1502]: time="2025-01-30T05:29:24.005633392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:24.006360 containerd[1502]: time="2025-01-30T05:29:24.005738814Z" level=info msg="RemovePodSandbox \"ff8a2f10f8421d10cddbb06d80092057855b28e751062d0a888270724280f05c\" returns successfully" Jan 30 05:29:24.006553 containerd[1502]: time="2025-01-30T05:29:24.006519984Z" level=info msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.058 [WARNING][5193] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f8dbe13-01dc-4a7e-bada-6efa483db38e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173", Pod:"coredns-668d6bf9bc-qz74f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib738a745f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.058 [INFO][5193] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.058 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" iface="eth0" netns="" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.058 [INFO][5193] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.058 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.094 [INFO][5199] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.094 [INFO][5199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.094 [INFO][5199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.103 [WARNING][5199] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.103 [INFO][5199] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.105 [INFO][5199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:24.113165 containerd[1502]: 2025-01-30 05:29:24.108 [INFO][5193] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.113979 containerd[1502]: time="2025-01-30T05:29:24.113170065Z" level=info msg="TearDown network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" successfully" Jan 30 05:29:24.113979 containerd[1502]: time="2025-01-30T05:29:24.113201495Z" level=info msg="StopPodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" returns successfully" Jan 30 05:29:24.113979 containerd[1502]: time="2025-01-30T05:29:24.113784634Z" level=info msg="RemovePodSandbox for \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" Jan 30 05:29:24.113979 containerd[1502]: time="2025-01-30T05:29:24.113816676Z" level=info msg="Forcibly stopping sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\"" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.168 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f8dbe13-01dc-4a7e-bada-6efa483db38e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"3867a52b682911f6f413b03be5ba2aafd49dc38465b81b6050c8ceb8c2680173", Pod:"coredns-668d6bf9bc-qz74f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib738a745f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.168 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.168 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" iface="eth0" netns="" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.168 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.168 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.202 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.204 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.204 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.208 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.208 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" HandleID="k8s-pod-network.27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-coredns--668d6bf9bc--qz74f-eth0" Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.210 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:24.215332 containerd[1502]: 2025-01-30 05:29:24.212 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87" Jan 30 05:29:24.216056 containerd[1502]: time="2025-01-30T05:29:24.215366407Z" level=info msg="TearDown network for sandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" successfully" Jan 30 05:29:24.220560 containerd[1502]: time="2025-01-30T05:29:24.220519519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:24.220644 containerd[1502]: time="2025-01-30T05:29:24.220573292Z" level=info msg="RemovePodSandbox \"27404d7ed1130d355e6fbfbe3265c6da06f3998b70c01fbaacbf040da1d6ac87\" returns successfully" Jan 30 05:29:24.221161 containerd[1502]: time="2025-01-30T05:29:24.221102487Z" level=info msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.262 [WARNING][5242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0", GenerateName:"calico-kube-controllers-5c7559cb77-", Namespace:"calico-system", SelfLink:"", UID:"bad96a0b-b018-45a1-96af-9edbd5119f12", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7559cb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b", Pod:"calico-kube-controllers-5c7559cb77-j6lmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d4ef5036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.262 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.262 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" iface="eth0" netns="" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.262 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.262 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.289 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.289 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.289 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.295 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.295 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.297 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:24.304624 containerd[1502]: 2025-01-30 05:29:24.300 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.305219 containerd[1502]: time="2025-01-30T05:29:24.304669913Z" level=info msg="TearDown network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" successfully" Jan 30 05:29:24.305219 containerd[1502]: time="2025-01-30T05:29:24.304706151Z" level=info msg="StopPodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" returns successfully" Jan 30 05:29:24.305542 containerd[1502]: time="2025-01-30T05:29:24.305439850Z" level=info msg="RemovePodSandbox for \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" Jan 30 05:29:24.305655 containerd[1502]: time="2025-01-30T05:29:24.305590379Z" level=info msg="Forcibly stopping sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\"" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.350 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0", GenerateName:"calico-kube-controllers-5c7559cb77-", Namespace:"calico-system", SelfLink:"", UID:"bad96a0b-b018-45a1-96af-9edbd5119f12", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7559cb77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-d-6ba27b8de2", ContainerID:"712682bff7324e8d4cefa06edb7af1d3550d247361d1ae8bafb1a074de504f4b", Pod:"calico-kube-controllers-5c7559cb77-j6lmn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali966d4ef5036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.351 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.351 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" iface="eth0" netns="" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.351 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.351 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.374 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.375 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.375 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.383 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.383 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" HandleID="k8s-pod-network.85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Workload="ci--4081--3--0--d--6ba27b8de2-k8s-calico--kube--controllers--5c7559cb77--j6lmn-eth0" Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.385 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:29:24.394790 containerd[1502]: 2025-01-30 05:29:24.391 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480" Jan 30 05:29:24.396083 containerd[1502]: time="2025-01-30T05:29:24.394840461Z" level=info msg="TearDown network for sandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" successfully" Jan 30 05:29:24.402111 containerd[1502]: time="2025-01-30T05:29:24.402013359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:29:24.402111 containerd[1502]: time="2025-01-30T05:29:24.402086781Z" level=info msg="RemovePodSandbox \"85a607d80368aa0e01cc128faecd6028b0e0f70277163f9799000e71b0f3d480\" returns successfully" Jan 30 05:29:33.547396 kubelet[2720]: I0130 05:29:33.547054 2720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rlwbr" podStartSLOduration=51.197019252 podStartE2EDuration="59.547038444s" podCreationTimestamp="2025-01-30 05:28:34 +0000 UTC" firstStartedPulling="2025-01-30 05:29:07.486367625 +0000 UTC m=+44.625218593" lastFinishedPulling="2025-01-30 05:29:15.836386818 +0000 UTC m=+52.975237785" observedRunningTime="2025-01-30 05:29:16.498114626 +0000 UTC m=+53.636965612" watchObservedRunningTime="2025-01-30 05:29:33.547038444 +0000 UTC m=+70.685889411" Jan 30 05:29:38.061766 kubelet[2720]: I0130 05:29:38.061703 2720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:30:33.419715 systemd[1]: run-containerd-runc-k8s.io-88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207-runc.G8jX2T.mount: Deactivated successfully. Jan 30 05:32:43.299690 systemd[1]: Started sshd@7-128.140.113.241:22-186.10.125.209:14146.service - OpenSSH per-connection server daemon (186.10.125.209:14146). Jan 30 05:32:44.621240 sshd[5715]: Invalid user admin from 186.10.125.209 port 14146 Jan 30 05:32:44.869132 sshd[5715]: Received disconnect from 186.10.125.209 port 14146:11: Bye Bye [preauth] Jan 30 05:32:44.869132 sshd[5715]: Disconnected from invalid user admin 186.10.125.209 port 14146 [preauth] Jan 30 05:32:44.875408 systemd[1]: sshd@7-128.140.113.241:22-186.10.125.209:14146.service: Deactivated successfully. Jan 30 05:33:14.017329 systemd[1]: Started sshd@8-128.140.113.241:22-139.178.89.65:58546.service - OpenSSH per-connection server daemon (139.178.89.65:58546). 
Jan 30 05:33:15.035786 sshd[5767]: Accepted publickey for core from 139.178.89.65 port 58546 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:15.042075 sshd[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:15.057689 systemd-logind[1473]: New session 8 of user core. Jan 30 05:33:15.069244 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 05:33:16.557062 sshd[5767]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:16.572619 systemd[1]: sshd@8-128.140.113.241:22-139.178.89.65:58546.service: Deactivated successfully. Jan 30 05:33:16.576815 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 05:33:16.581361 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit. Jan 30 05:33:16.584372 systemd-logind[1473]: Removed session 8. Jan 30 05:33:21.732043 systemd[1]: Started sshd@9-128.140.113.241:22-139.178.89.65:51042.service - OpenSSH per-connection server daemon (139.178.89.65:51042). Jan 30 05:33:22.760264 sshd[5821]: Accepted publickey for core from 139.178.89.65 port 51042 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:22.764379 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:22.770707 systemd-logind[1473]: New session 9 of user core. Jan 30 05:33:22.778127 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 05:33:23.584212 sshd[5821]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:23.590818 systemd[1]: sshd@9-128.140.113.241:22-139.178.89.65:51042.service: Deactivated successfully. Jan 30 05:33:23.596042 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 05:33:23.601031 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit. Jan 30 05:33:23.603638 systemd-logind[1473]: Removed session 9. Jan 30 05:33:28.760388 systemd[1]: Started sshd@10-128.140.113.241:22-139.178.89.65:51046.service - OpenSSH per-connection server daemon (139.178.89.65:51046). Jan 30 05:33:29.768063 sshd[5838]: Accepted publickey for core from 139.178.89.65 port 51046 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:29.771585 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:29.782359 systemd-logind[1473]: New session 10 of user core. Jan 30 05:33:29.791170 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 05:33:30.583117 sshd[5838]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:30.589776 systemd[1]: sshd@10-128.140.113.241:22-139.178.89.65:51046.service: Deactivated successfully. Jan 30 05:33:30.596469 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 05:33:30.600605 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit. Jan 30 05:33:30.602788 systemd-logind[1473]: Removed session 10. Jan 30 05:33:35.764497 systemd[1]: Started sshd@11-128.140.113.241:22-139.178.89.65:42836.service - OpenSSH per-connection server daemon (139.178.89.65:42836). Jan 30 05:33:36.772956 sshd[5876]: Accepted publickey for core from 139.178.89.65 port 42836 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:36.774985 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:36.789588 systemd-logind[1473]: New session 11 of user core. Jan 30 05:33:36.799561 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 05:33:37.580819 sshd[5876]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:37.590465 systemd[1]: sshd@11-128.140.113.241:22-139.178.89.65:42836.service: Deactivated successfully. Jan 30 05:33:37.596436 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 05:33:37.600407 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit. Jan 30 05:33:37.604127 systemd-logind[1473]: Removed session 11. Jan 30 05:33:42.759781 systemd[1]: Started sshd@12-128.140.113.241:22-139.178.89.65:52480.service - OpenSSH per-connection server daemon (139.178.89.65:52480). Jan 30 05:33:43.752471 sshd[5890]: Accepted publickey for core from 139.178.89.65 port 52480 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:43.756644 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:43.768637 systemd-logind[1473]: New session 12 of user core. Jan 30 05:33:43.776252 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 05:33:44.563650 sshd[5890]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:44.571248 systemd[1]: sshd@12-128.140.113.241:22-139.178.89.65:52480.service: Deactivated successfully. Jan 30 05:33:44.577359 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 05:33:44.579238 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit. Jan 30 05:33:44.583106 systemd-logind[1473]: Removed session 12. Jan 30 05:33:49.744451 systemd[1]: Started sshd@13-128.140.113.241:22-139.178.89.65:52490.service - OpenSSH per-connection server daemon (139.178.89.65:52490). Jan 30 05:33:50.760118 sshd[5929]: Accepted publickey for core from 139.178.89.65 port 52490 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:50.764000 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:50.774250 systemd-logind[1473]: New session 13 of user core. Jan 30 05:33:50.782279 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 05:33:51.539570 sshd[5929]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:51.548505 systemd[1]: sshd@13-128.140.113.241:22-139.178.89.65:52490.service: Deactivated successfully. Jan 30 05:33:51.553998 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 05:33:51.556239 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit. Jan 30 05:33:51.558482 systemd-logind[1473]: Removed session 13. Jan 30 05:33:56.719274 systemd[1]: Started sshd@14-128.140.113.241:22-139.178.89.65:53520.service - OpenSSH per-connection server daemon (139.178.89.65:53520). Jan 30 05:33:57.725344 sshd[5956]: Accepted publickey for core from 139.178.89.65 port 53520 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:33:57.729630 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:33:57.740815 systemd-logind[1473]: New session 14 of user core. Jan 30 05:33:57.749139 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 05:33:58.542183 sshd[5956]: pam_unix(sshd:session): session closed for user core Jan 30 05:33:58.549066 systemd[1]: sshd@14-128.140.113.241:22-139.178.89.65:53520.service: Deactivated successfully. Jan 30 05:33:58.553775 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 05:33:58.557410 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit. Jan 30 05:33:58.560456 systemd-logind[1473]: Removed session 14. 
Jan 30 05:34:03.726572 systemd[1]: Started sshd@15-128.140.113.241:22-139.178.89.65:43262.service - OpenSSH per-connection server daemon (139.178.89.65:43262). Jan 30 05:34:04.762166 sshd[5995]: Accepted publickey for core from 139.178.89.65 port 43262 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:04.766938 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:04.775672 systemd-logind[1473]: New session 15 of user core. Jan 30 05:34:04.783167 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 05:34:05.592672 sshd[5995]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:05.601758 systemd[1]: sshd@15-128.140.113.241:22-139.178.89.65:43262.service: Deactivated successfully. Jan 30 05:34:05.607227 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 05:34:05.608772 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit. Jan 30 05:34:05.610804 systemd-logind[1473]: Removed session 15. Jan 30 05:34:10.764630 systemd[1]: Started sshd@16-128.140.113.241:22-139.178.89.65:43270.service - OpenSSH per-connection server daemon (139.178.89.65:43270). Jan 30 05:34:11.747583 sshd[6009]: Accepted publickey for core from 139.178.89.65 port 43270 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:11.750538 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:11.758847 systemd-logind[1473]: New session 16 of user core. Jan 30 05:34:11.763135 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 05:34:12.555196 sshd[6009]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:12.563885 systemd[1]: sshd@16-128.140.113.241:22-139.178.89.65:43270.service: Deactivated successfully. Jan 30 05:34:12.569857 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 05:34:12.572344 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit. Jan 30 05:34:12.575285 systemd-logind[1473]: Removed session 16. Jan 30 05:34:17.484426 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.pqjWSd.mount: Deactivated successfully. Jan 30 05:34:17.734590 systemd[1]: Started sshd@17-128.140.113.241:22-139.178.89.65:46174.service - OpenSSH per-connection server daemon (139.178.89.65:46174). Jan 30 05:34:18.757747 sshd[6061]: Accepted publickey for core from 139.178.89.65 port 46174 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:18.759745 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:18.772192 systemd-logind[1473]: New session 17 of user core. Jan 30 05:34:18.777176 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 05:34:19.592624 sshd[6061]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:19.601878 systemd[1]: sshd@17-128.140.113.241:22-139.178.89.65:46174.service: Deactivated successfully. Jan 30 05:34:19.606684 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 05:34:19.610155 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit. Jan 30 05:34:19.612793 systemd-logind[1473]: Removed session 17. Jan 30 05:34:24.774704 systemd[1]: Started sshd@18-128.140.113.241:22-139.178.89.65:41350.service - OpenSSH per-connection server daemon (139.178.89.65:41350). 
Jan 30 05:34:25.791363 sshd[6079]: Accepted publickey for core from 139.178.89.65 port 41350 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:25.795423 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:25.807001 systemd-logind[1473]: New session 18 of user core. Jan 30 05:34:25.817210 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 05:34:26.608193 sshd[6079]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:26.616242 systemd[1]: sshd@18-128.140.113.241:22-139.178.89.65:41350.service: Deactivated successfully. Jan 30 05:34:26.621936 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 05:34:26.625161 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit. Jan 30 05:34:26.627225 systemd-logind[1473]: Removed session 18. Jan 30 05:34:31.779410 systemd[1]: Started sshd@19-128.140.113.241:22-139.178.89.65:49752.service - OpenSSH per-connection server daemon (139.178.89.65:49752). Jan 30 05:34:32.815860 sshd[6101]: Accepted publickey for core from 139.178.89.65 port 49752 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:32.819990 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:32.830000 systemd-logind[1473]: New session 19 of user core. Jan 30 05:34:32.837271 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 05:34:33.673367 sshd[6101]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:33.680608 systemd[1]: sshd@19-128.140.113.241:22-139.178.89.65:49752.service: Deactivated successfully. Jan 30 05:34:33.685951 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 05:34:33.689033 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit. Jan 30 05:34:33.691939 systemd-logind[1473]: Removed session 19. Jan 30 05:34:38.853541 systemd[1]: Started sshd@20-128.140.113.241:22-139.178.89.65:49754.service - OpenSSH per-connection server daemon (139.178.89.65:49754). Jan 30 05:34:39.880875 sshd[6140]: Accepted publickey for core from 139.178.89.65 port 49754 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:39.885936 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:39.897329 systemd-logind[1473]: New session 20 of user core. Jan 30 05:34:39.904160 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 05:34:40.673213 sshd[6140]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:40.684757 systemd[1]: sshd@20-128.140.113.241:22-139.178.89.65:49754.service: Deactivated successfully. Jan 30 05:34:40.691178 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 05:34:40.692826 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit. Jan 30 05:34:40.696067 systemd-logind[1473]: Removed session 20. Jan 30 05:34:45.855596 systemd[1]: Started sshd@21-128.140.113.241:22-139.178.89.65:52196.service - OpenSSH per-connection server daemon (139.178.89.65:52196). Jan 30 05:34:46.870012 sshd[6174]: Accepted publickey for core from 139.178.89.65 port 52196 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:46.873752 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:46.885018 systemd-logind[1473]: New session 21 of user core. Jan 30 05:34:46.894301 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 30 05:34:47.688092 sshd[6174]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:47.694030 systemd[1]: sshd@21-128.140.113.241:22-139.178.89.65:52196.service: Deactivated successfully. Jan 30 05:34:47.699131 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 05:34:47.702084 systemd-logind[1473]: Session 21 logged out. Waiting for processes to exit. Jan 30 05:34:47.706551 systemd-logind[1473]: Removed session 21. Jan 30 05:34:52.866427 systemd[1]: Started sshd@22-128.140.113.241:22-139.178.89.65:39288.service - OpenSSH per-connection server daemon (139.178.89.65:39288). Jan 30 05:34:53.874264 sshd[6188]: Accepted publickey for core from 139.178.89.65 port 39288 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:34:53.878072 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:34:53.888709 systemd-logind[1473]: New session 22 of user core. Jan 30 05:34:53.893213 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 05:34:54.662807 sshd[6188]: pam_unix(sshd:session): session closed for user core Jan 30 05:34:54.672694 systemd[1]: sshd@22-128.140.113.241:22-139.178.89.65:39288.service: Deactivated successfully. Jan 30 05:34:54.678791 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 05:34:54.680754 systemd-logind[1473]: Session 22 logged out. Waiting for processes to exit. Jan 30 05:34:54.683372 systemd-logind[1473]: Removed session 22. Jan 30 05:34:59.843856 systemd[1]: Started sshd@23-128.140.113.241:22-139.178.89.65:39296.service - OpenSSH per-connection server daemon (139.178.89.65:39296). Jan 30 05:35:00.846825 sshd[6204]: Accepted publickey for core from 139.178.89.65 port 39296 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:00.850437 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:00.859999 systemd-logind[1473]: New session 23 of user core. Jan 30 05:35:00.866196 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 05:35:01.656595 sshd[6204]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:01.663240 systemd[1]: sshd@23-128.140.113.241:22-139.178.89.65:39296.service: Deactivated successfully. Jan 30 05:35:01.668169 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 05:35:01.672338 systemd-logind[1473]: Session 23 logged out. Waiting for processes to exit. Jan 30 05:35:01.674953 systemd-logind[1473]: Removed session 23. Jan 30 05:35:06.833565 systemd[1]: Started sshd@24-128.140.113.241:22-139.178.89.65:45126.service - OpenSSH per-connection server daemon (139.178.89.65:45126). Jan 30 05:35:07.836866 sshd[6240]: Accepted publickey for core from 139.178.89.65 port 45126 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:07.840141 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:07.850334 systemd-logind[1473]: New session 24 of user core. Jan 30 05:35:07.857196 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 05:35:08.633148 sshd[6240]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:08.641080 systemd[1]: sshd@24-128.140.113.241:22-139.178.89.65:45126.service: Deactivated successfully. Jan 30 05:35:08.647771 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 05:35:08.650212 systemd-logind[1473]: Session 24 logged out. Waiting for processes to exit. Jan 30 05:35:08.653230 systemd-logind[1473]: Removed session 24. 
Jan 30 05:35:13.815517 systemd[1]: Started sshd@25-128.140.113.241:22-139.178.89.65:32838.service - OpenSSH per-connection server daemon (139.178.89.65:32838). Jan 30 05:35:14.805311 sshd[6255]: Accepted publickey for core from 139.178.89.65 port 32838 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:14.808488 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:14.818073 systemd-logind[1473]: New session 25 of user core. Jan 30 05:35:14.828560 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 05:35:15.517682 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.EH4PDU.mount: Deactivated successfully. Jan 30 05:35:15.613365 sshd[6255]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:15.620544 systemd[1]: sshd@25-128.140.113.241:22-139.178.89.65:32838.service: Deactivated successfully. Jan 30 05:35:15.626547 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 05:35:15.629510 systemd-logind[1473]: Session 25 logged out. Waiting for processes to exit. Jan 30 05:35:15.632253 systemd-logind[1473]: Removed session 25. Jan 30 05:35:20.799024 systemd[1]: Started sshd@26-128.140.113.241:22-139.178.89.65:32842.service - OpenSSH per-connection server daemon (139.178.89.65:32842). Jan 30 05:35:21.816986 sshd[6305]: Accepted publickey for core from 139.178.89.65 port 32842 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:21.820831 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:21.831582 systemd-logind[1473]: New session 26 of user core. Jan 30 05:35:21.839191 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 05:35:22.623236 sshd[6305]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:22.629112 systemd[1]: sshd@26-128.140.113.241:22-139.178.89.65:32842.service: Deactivated successfully. Jan 30 05:35:22.631124 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 05:35:22.633523 systemd-logind[1473]: Session 26 logged out. Waiting for processes to exit. Jan 30 05:35:22.636253 systemd-logind[1473]: Removed session 26. Jan 30 05:35:27.801553 systemd[1]: Started sshd@27-128.140.113.241:22-139.178.89.65:47376.service - OpenSSH per-connection server daemon (139.178.89.65:47376). Jan 30 05:35:28.813680 sshd[6326]: Accepted publickey for core from 139.178.89.65 port 47376 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:28.817224 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:28.828121 systemd-logind[1473]: New session 27 of user core. Jan 30 05:35:28.835413 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 05:35:28.999488 systemd[1]: Started sshd@28-128.140.113.241:22-8.222.176.70:53610.service - OpenSSH per-connection server daemon (8.222.176.70:53610). Jan 30 05:35:29.638837 sshd[6326]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:29.647186 systemd[1]: sshd@27-128.140.113.241:22-139.178.89.65:47376.service: Deactivated successfully. Jan 30 05:35:29.654473 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 05:35:29.657026 systemd-logind[1473]: Session 27 logged out. Waiting for processes to exit. Jan 30 05:35:29.659488 systemd-logind[1473]: Removed session 27. 
Jan 30 05:35:30.690264 sshd[6344]: Received disconnect from 8.222.176.70 port 53610:11: Bye Bye [preauth] Jan 30 05:35:30.690264 sshd[6344]: Disconnected from authenticating user root 8.222.176.70 port 53610 [preauth] Jan 30 05:35:30.693104 systemd[1]: sshd@28-128.140.113.241:22-8.222.176.70:53610.service: Deactivated successfully. Jan 30 05:35:34.818754 systemd[1]: Started sshd@29-128.140.113.241:22-139.178.89.65:40834.service - OpenSSH per-connection server daemon (139.178.89.65:40834). Jan 30 05:35:35.826421 sshd[6380]: Accepted publickey for core from 139.178.89.65 port 40834 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:35.830217 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:35.838525 systemd-logind[1473]: New session 28 of user core. Jan 30 05:35:35.847151 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 05:35:36.617636 sshd[6380]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:36.625553 systemd[1]: sshd@29-128.140.113.241:22-139.178.89.65:40834.service: Deactivated successfully. Jan 30 05:35:36.631210 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 05:35:36.632992 systemd-logind[1473]: Session 28 logged out. Waiting for processes to exit. Jan 30 05:35:36.635610 systemd-logind[1473]: Removed session 28. Jan 30 05:35:41.803048 systemd[1]: Started sshd@30-128.140.113.241:22-139.178.89.65:53280.service - OpenSSH per-connection server daemon (139.178.89.65:53280). Jan 30 05:35:42.810880 sshd[6394]: Accepted publickey for core from 139.178.89.65 port 53280 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:42.814216 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:42.823858 systemd-logind[1473]: New session 29 of user core. Jan 30 05:35:42.833183 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 05:35:43.585483 sshd[6394]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:43.594768 systemd[1]: sshd@30-128.140.113.241:22-139.178.89.65:53280.service: Deactivated successfully. Jan 30 05:35:43.600804 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 05:35:43.603150 systemd-logind[1473]: Session 29 logged out. Waiting for processes to exit. Jan 30 05:35:43.605453 systemd-logind[1473]: Removed session 29. Jan 30 05:35:48.770377 systemd[1]: Started sshd@31-128.140.113.241:22-139.178.89.65:53286.service - OpenSSH per-connection server daemon (139.178.89.65:53286). Jan 30 05:35:49.779742 sshd[6428]: Accepted publickey for core from 139.178.89.65 port 53286 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:49.783600 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:49.796133 systemd-logind[1473]: New session 30 of user core. Jan 30 05:35:49.803279 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 30 05:35:50.580297 sshd[6428]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:50.588488 systemd[1]: sshd@31-128.140.113.241:22-139.178.89.65:53286.service: Deactivated successfully. Jan 30 05:35:50.594364 systemd[1]: session-30.scope: Deactivated successfully. Jan 30 05:35:50.596182 systemd-logind[1473]: Session 30 logged out. Waiting for processes to exit. Jan 30 05:35:50.598384 systemd-logind[1473]: Removed session 30. 
Jan 30 05:35:55.760599 systemd[1]: Started sshd@32-128.140.113.241:22-139.178.89.65:37944.service - OpenSSH per-connection server daemon (139.178.89.65:37944). Jan 30 05:35:56.758955 sshd[6441]: Accepted publickey for core from 139.178.89.65 port 37944 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:35:56.762218 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:35:56.777411 systemd-logind[1473]: New session 31 of user core. Jan 30 05:35:56.783647 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 30 05:35:57.534591 sshd[6441]: pam_unix(sshd:session): session closed for user core Jan 30 05:35:57.540703 systemd[1]: sshd@32-128.140.113.241:22-139.178.89.65:37944.service: Deactivated successfully. Jan 30 05:35:57.548401 systemd[1]: session-31.scope: Deactivated successfully. Jan 30 05:35:57.552179 systemd-logind[1473]: Session 31 logged out. Waiting for processes to exit. Jan 30 05:35:57.554130 systemd-logind[1473]: Removed session 31. Jan 30 05:36:01.155678 systemd[1]: Started sshd@33-128.140.113.241:22-178.128.149.80:46262.service - OpenSSH per-connection server daemon (178.128.149.80:46262). Jan 30 05:36:01.701156 sshd[6460]: Invalid user git from 178.128.149.80 port 46262 Jan 30 05:36:01.798144 sshd[6460]: Received disconnect from 178.128.149.80 port 46262:11: Bye Bye [preauth] Jan 30 05:36:01.798144 sshd[6460]: Disconnected from invalid user git 178.128.149.80 port 46262 [preauth] Jan 30 05:36:01.804037 systemd[1]: sshd@33-128.140.113.241:22-178.128.149.80:46262.service: Deactivated successfully. Jan 30 05:36:02.714158 systemd[1]: Started sshd@34-128.140.113.241:22-139.178.89.65:34634.service - OpenSSH per-connection server daemon (139.178.89.65:34634). Jan 30 05:36:03.723322 sshd[6465]: Accepted publickey for core from 139.178.89.65 port 34634 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:03.727644 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:03.735213 systemd-logind[1473]: New session 32 of user core. Jan 30 05:36:03.741112 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 30 05:36:04.543712 sshd[6465]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:04.550948 systemd[1]: sshd@34-128.140.113.241:22-139.178.89.65:34634.service: Deactivated successfully. Jan 30 05:36:04.557235 systemd[1]: session-32.scope: Deactivated successfully. Jan 30 05:36:04.561547 systemd-logind[1473]: Session 32 logged out. Waiting for processes to exit. Jan 30 05:36:04.563726 systemd-logind[1473]: Removed session 32. Jan 30 05:36:09.719481 systemd[1]: Started sshd@35-128.140.113.241:22-139.178.89.65:34642.service - OpenSSH per-connection server daemon (139.178.89.65:34642). Jan 30 05:36:10.740869 sshd[6502]: Accepted publickey for core from 139.178.89.65 port 34642 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:10.746796 sshd[6502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:10.754786 systemd-logind[1473]: New session 33 of user core. Jan 30 05:36:10.765252 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 30 05:36:11.568860 sshd[6502]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:11.574023 systemd[1]: sshd@35-128.140.113.241:22-139.178.89.65:34642.service: Deactivated successfully. Jan 30 05:36:11.577886 systemd[1]: session-33.scope: Deactivated successfully. 
Jan 30 05:36:11.580300 systemd-logind[1473]: Session 33 logged out. Waiting for processes to exit. Jan 30 05:36:11.582220 systemd-logind[1473]: Removed session 33. Jan 30 05:36:16.748584 systemd[1]: Started sshd@36-128.140.113.241:22-139.178.89.65:55952.service - OpenSSH per-connection server daemon (139.178.89.65:55952). Jan 30 05:36:17.727809 sshd[6535]: Accepted publickey for core from 139.178.89.65 port 55952 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:17.730110 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:17.735762 systemd-logind[1473]: New session 34 of user core. Jan 30 05:36:17.742004 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 30 05:36:18.529322 sshd[6535]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:18.537432 systemd[1]: sshd@36-128.140.113.241:22-139.178.89.65:55952.service: Deactivated successfully. Jan 30 05:36:18.542745 systemd[1]: session-34.scope: Deactivated successfully. Jan 30 05:36:18.546727 systemd-logind[1473]: Session 34 logged out. Waiting for processes to exit. Jan 30 05:36:18.549430 systemd-logind[1473]: Removed session 34. Jan 30 05:36:22.507377 systemd[1]: Started sshd@37-128.140.113.241:22-176.10.207.140:56256.service - OpenSSH per-connection server daemon (176.10.207.140:56256). Jan 30 05:36:22.759053 sshd[6570]: Received disconnect from 176.10.207.140 port 56256:11: Bye Bye [preauth] Jan 30 05:36:22.759053 sshd[6570]: Disconnected from authenticating user root 176.10.207.140 port 56256 [preauth] Jan 30 05:36:22.764113 systemd[1]: sshd@37-128.140.113.241:22-176.10.207.140:56256.service: Deactivated successfully. Jan 30 05:36:23.707337 systemd[1]: Started sshd@38-128.140.113.241:22-139.178.89.65:49358.service - OpenSSH per-connection server daemon (139.178.89.65:49358). Jan 30 05:36:24.717622 sshd[6577]: Accepted publickey for core from 139.178.89.65 port 49358 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:24.721306 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:24.732118 systemd-logind[1473]: New session 35 of user core. Jan 30 05:36:24.738189 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 30 05:36:25.489589 sshd[6577]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:25.497840 systemd[1]: sshd@38-128.140.113.241:22-139.178.89.65:49358.service: Deactivated successfully. Jan 30 05:36:25.504758 systemd[1]: session-35.scope: Deactivated successfully. Jan 30 05:36:25.507078 systemd-logind[1473]: Session 35 logged out. Waiting for processes to exit. Jan 30 05:36:25.509094 systemd-logind[1473]: Removed session 35. Jan 30 05:36:30.668974 systemd[1]: Started sshd@39-128.140.113.241:22-139.178.89.65:49370.service - OpenSSH per-connection server daemon (139.178.89.65:49370). Jan 30 05:36:31.669518 sshd[6593]: Accepted publickey for core from 139.178.89.65 port 49370 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:31.672938 sshd[6593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:31.683287 systemd-logind[1473]: New session 36 of user core. Jan 30 05:36:31.689174 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 30 05:36:32.444882 sshd[6593]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:32.456558 systemd[1]: sshd@39-128.140.113.241:22-139.178.89.65:49370.service: Deactivated successfully. 
Jan 30 05:36:32.465447 systemd[1]: session-36.scope: Deactivated successfully. Jan 30 05:36:32.468042 systemd-logind[1473]: Session 36 logged out. Waiting for processes to exit. Jan 30 05:36:32.470533 systemd-logind[1473]: Removed session 36. Jan 30 05:36:37.622191 systemd[1]: Started sshd@40-128.140.113.241:22-139.178.89.65:55734.service - OpenSSH per-connection server daemon (139.178.89.65:55734). Jan 30 05:36:38.630019 sshd[6630]: Accepted publickey for core from 139.178.89.65 port 55734 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:38.633848 sshd[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:38.645007 systemd-logind[1473]: New session 37 of user core. Jan 30 05:36:38.653237 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 30 05:36:39.460199 sshd[6630]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:39.467797 systemd[1]: sshd@40-128.140.113.241:22-139.178.89.65:55734.service: Deactivated successfully. Jan 30 05:36:39.473726 systemd[1]: session-37.scope: Deactivated successfully. Jan 30 05:36:39.477306 systemd-logind[1473]: Session 37 logged out. Waiting for processes to exit. Jan 30 05:36:39.481397 systemd-logind[1473]: Removed session 37. Jan 30 05:36:44.641041 systemd[1]: Started sshd@41-128.140.113.241:22-139.178.89.65:49016.service - OpenSSH per-connection server daemon (139.178.89.65:49016). Jan 30 05:36:45.646544 sshd[6644]: Accepted publickey for core from 139.178.89.65 port 49016 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:45.650150 sshd[6644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:45.659977 systemd-logind[1473]: New session 38 of user core. Jan 30 05:36:45.672211 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 30 05:36:46.455537 sshd[6644]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:46.465555 systemd[1]: sshd@41-128.140.113.241:22-139.178.89.65:49016.service: Deactivated successfully. Jan 30 05:36:46.469822 systemd[1]: session-38.scope: Deactivated successfully. Jan 30 05:36:46.471661 systemd-logind[1473]: Session 38 logged out. Waiting for processes to exit. Jan 30 05:36:46.473694 systemd-logind[1473]: Removed session 38. Jan 30 05:36:51.637376 systemd[1]: Started sshd@42-128.140.113.241:22-139.178.89.65:36486.service - OpenSSH per-connection server daemon (139.178.89.65:36486). Jan 30 05:36:52.649414 sshd[6677]: Accepted publickey for core from 139.178.89.65 port 36486 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:52.653106 sshd[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:52.662670 systemd-logind[1473]: New session 39 of user core. Jan 30 05:36:52.669206 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 30 05:36:53.475467 sshd[6677]: pam_unix(sshd:session): session closed for user core Jan 30 05:36:53.483823 systemd[1]: sshd@42-128.140.113.241:22-139.178.89.65:36486.service: Deactivated successfully. Jan 30 05:36:53.488525 systemd[1]: session-39.scope: Deactivated successfully. Jan 30 05:36:53.491326 systemd-logind[1473]: Session 39 logged out. Waiting for processes to exit. Jan 30 05:36:53.494333 systemd-logind[1473]: Removed session 39. Jan 30 05:36:58.003552 systemd[1]: Started sshd@43-128.140.113.241:22-103.146.159.74:41312.service - OpenSSH per-connection server daemon (103.146.159.74:41312). 
Jan 30 05:36:58.651244 systemd[1]: Started sshd@44-128.140.113.241:22-139.178.89.65:36498.service - OpenSSH per-connection server daemon (139.178.89.65:36498). Jan 30 05:36:59.281670 sshd[6691]: Invalid user fis from 103.146.159.74 port 41312 Jan 30 05:36:59.524053 sshd[6691]: Received disconnect from 103.146.159.74 port 41312:11: Bye Bye [preauth] Jan 30 05:36:59.524053 sshd[6691]: Disconnected from invalid user fis 103.146.159.74 port 41312 [preauth] Jan 30 05:36:59.529407 systemd[1]: sshd@43-128.140.113.241:22-103.146.159.74:41312.service: Deactivated successfully. Jan 30 05:36:59.627393 sshd[6694]: Accepted publickey for core from 139.178.89.65 port 36498 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:36:59.633632 sshd[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:36:59.644600 systemd-logind[1473]: New session 40 of user core. Jan 30 05:36:59.653170 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 30 05:37:00.437821 sshd[6694]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:00.446366 systemd[1]: sshd@44-128.140.113.241:22-139.178.89.65:36498.service: Deactivated successfully. Jan 30 05:37:00.453126 systemd[1]: session-40.scope: Deactivated successfully. Jan 30 05:37:00.455476 systemd-logind[1473]: Session 40 logged out. Waiting for processes to exit. Jan 30 05:37:00.458781 systemd-logind[1473]: Removed session 40. Jan 30 05:37:05.628422 systemd[1]: Started sshd@45-128.140.113.241:22-139.178.89.65:39316.service - OpenSSH per-connection server daemon (139.178.89.65:39316). Jan 30 05:37:06.644964 sshd[6751]: Accepted publickey for core from 139.178.89.65 port 39316 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:06.648688 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:06.659020 systemd-logind[1473]: New session 41 of user core. Jan 30 05:37:06.667294 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 30 05:37:07.458834 sshd[6751]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:07.466478 systemd[1]: sshd@45-128.140.113.241:22-139.178.89.65:39316.service: Deactivated successfully. Jan 30 05:37:07.471544 systemd[1]: session-41.scope: Deactivated successfully. Jan 30 05:37:07.473224 systemd-logind[1473]: Session 41 logged out. Waiting for processes to exit. Jan 30 05:37:07.475344 systemd-logind[1473]: Removed session 41. Jan 30 05:37:12.644452 systemd[1]: Started sshd@46-128.140.113.241:22-139.178.89.65:45166.service - OpenSSH per-connection server daemon (139.178.89.65:45166). Jan 30 05:37:13.656940 sshd[6764]: Accepted publickey for core from 139.178.89.65 port 45166 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:13.660479 sshd[6764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:13.669858 systemd-logind[1473]: New session 42 of user core. Jan 30 05:37:13.675126 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 30 05:37:14.456230 sshd[6764]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:14.463277 systemd[1]: sshd@46-128.140.113.241:22-139.178.89.65:45166.service: Deactivated successfully. Jan 30 05:37:14.466879 systemd[1]: session-42.scope: Deactivated successfully. Jan 30 05:37:14.468823 systemd-logind[1473]: Session 42 logged out. Waiting for processes to exit. Jan 30 05:37:14.470945 systemd-logind[1473]: Removed session 42. 
Jan 30 05:37:19.635421 systemd[1]: Started sshd@47-128.140.113.241:22-139.178.89.65:45172.service - OpenSSH per-connection server daemon (139.178.89.65:45172). Jan 30 05:37:20.625878 sshd[6815]: Accepted publickey for core from 139.178.89.65 port 45172 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:20.629459 sshd[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:20.638173 systemd-logind[1473]: New session 43 of user core. Jan 30 05:37:20.645102 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 30 05:37:21.439780 sshd[6815]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:21.449109 systemd[1]: sshd@47-128.140.113.241:22-139.178.89.65:45172.service: Deactivated successfully. Jan 30 05:37:21.456531 systemd[1]: session-43.scope: Deactivated successfully. Jan 30 05:37:21.458499 systemd-logind[1473]: Session 43 logged out. Waiting for processes to exit. Jan 30 05:37:21.460629 systemd-logind[1473]: Removed session 43. Jan 30 05:37:21.617344 systemd[1]: Started sshd@48-128.140.113.241:22-139.178.89.65:33734.service - OpenSSH per-connection server daemon (139.178.89.65:33734). Jan 30 05:37:22.609056 sshd[6828]: Accepted publickey for core from 139.178.89.65 port 33734 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:22.612306 sshd[6828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:22.620967 systemd-logind[1473]: New session 44 of user core. Jan 30 05:37:22.625111 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 30 05:37:23.443947 sshd[6828]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:23.450429 systemd[1]: sshd@48-128.140.113.241:22-139.178.89.65:33734.service: Deactivated successfully. Jan 30 05:37:23.455190 systemd[1]: session-44.scope: Deactivated successfully. Jan 30 05:37:23.458537 systemd-logind[1473]: Session 44 logged out. Waiting for processes to exit. Jan 30 05:37:23.461499 systemd-logind[1473]: Removed session 44. Jan 30 05:37:23.625541 systemd[1]: Started sshd@49-128.140.113.241:22-139.178.89.65:33738.service - OpenSSH per-connection server daemon (139.178.89.65:33738). Jan 30 05:37:24.640040 sshd[6841]: Accepted publickey for core from 139.178.89.65 port 33738 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:24.643425 sshd[6841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:24.652567 systemd-logind[1473]: New session 45 of user core. Jan 30 05:37:24.665183 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 30 05:37:25.452634 sshd[6841]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:25.460366 systemd[1]: sshd@49-128.140.113.241:22-139.178.89.65:33738.service: Deactivated successfully. Jan 30 05:37:25.465665 systemd[1]: session-45.scope: Deactivated successfully. Jan 30 05:37:25.470090 systemd-logind[1473]: Session 45 logged out. Waiting for processes to exit. Jan 30 05:37:25.472698 systemd-logind[1473]: Removed session 45. Jan 30 05:37:30.626536 systemd[1]: Started sshd@50-128.140.113.241:22-139.178.89.65:33754.service - OpenSSH per-connection server daemon (139.178.89.65:33754). 
Jan 30 05:37:31.657698 sshd[6864]: Accepted publickey for core from 139.178.89.65 port 33754 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:31.661816 sshd[6864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:31.672966 systemd-logind[1473]: New session 46 of user core. Jan 30 05:37:31.679170 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 30 05:37:32.496457 sshd[6864]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:32.505579 systemd[1]: sshd@50-128.140.113.241:22-139.178.89.65:33754.service: Deactivated successfully. Jan 30 05:37:32.510339 systemd[1]: session-46.scope: Deactivated successfully. Jan 30 05:37:32.512824 systemd-logind[1473]: Session 46 logged out. Waiting for processes to exit. Jan 30 05:37:32.515688 systemd-logind[1473]: Removed session 46. Jan 30 05:37:32.549779 systemd[1]: Started sshd@51-128.140.113.241:22-178.128.149.80:46988.service - OpenSSH per-connection server daemon (178.128.149.80:46988). Jan 30 05:37:33.087715 sshd[6877]: Invalid user es from 178.128.149.80 port 46988 Jan 30 05:37:33.182536 sshd[6877]: Received disconnect from 178.128.149.80 port 46988:11: Bye Bye [preauth] Jan 30 05:37:33.182536 sshd[6877]: Disconnected from invalid user es 178.128.149.80 port 46988 [preauth] Jan 30 05:37:33.188458 systemd[1]: sshd@51-128.140.113.241:22-178.128.149.80:46988.service: Deactivated successfully. Jan 30 05:37:37.676398 systemd[1]: Started sshd@52-128.140.113.241:22-139.178.89.65:53818.service - OpenSSH per-connection server daemon (139.178.89.65:53818). Jan 30 05:37:38.723085 sshd[6905]: Accepted publickey for core from 139.178.89.65 port 53818 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:38.728351 sshd[6905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:38.738449 systemd-logind[1473]: New session 47 of user core. Jan 30 05:37:38.743142 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 30 05:37:39.569575 sshd[6905]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:39.577101 systemd[1]: sshd@52-128.140.113.241:22-139.178.89.65:53818.service: Deactivated successfully. Jan 30 05:37:39.582599 systemd[1]: session-47.scope: Deactivated successfully. Jan 30 05:37:39.585704 systemd-logind[1473]: Session 47 logged out. Waiting for processes to exit. Jan 30 05:37:39.588846 systemd-logind[1473]: Removed session 47. Jan 30 05:37:44.757535 systemd[1]: Started sshd@53-128.140.113.241:22-139.178.89.65:37774.service - OpenSSH per-connection server daemon (139.178.89.65:37774). Jan 30 05:37:45.518111 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.k9iTO1.mount: Deactivated successfully. Jan 30 05:37:45.782953 sshd[6919]: Accepted publickey for core from 139.178.89.65 port 37774 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:45.787584 sshd[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:45.797482 systemd-logind[1473]: New session 48 of user core. Jan 30 05:37:45.804312 systemd[1]: Started session-48.scope - Session 48 of User core. Jan 30 05:37:46.618531 sshd[6919]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:46.627303 systemd-logind[1473]: Session 48 logged out. Waiting for processes to exit. Jan 30 05:37:46.627932 systemd[1]: sshd@53-128.140.113.241:22-139.178.89.65:37774.service: Deactivated successfully. 
Jan 30 05:37:46.633735 systemd[1]: session-48.scope: Deactivated successfully. Jan 30 05:37:46.639158 systemd-logind[1473]: Removed session 48. Jan 30 05:37:49.216401 systemd[1]: Started sshd@54-128.140.113.241:22-186.10.125.209:7343.service - OpenSSH per-connection server daemon (186.10.125.209:7343). Jan 30 05:37:50.568559 sshd[6951]: Invalid user steam from 186.10.125.209 port 7343 Jan 30 05:37:50.828000 sshd[6951]: Received disconnect from 186.10.125.209 port 7343:11: Bye Bye [preauth] Jan 30 05:37:50.828000 sshd[6951]: Disconnected from invalid user steam 186.10.125.209 port 7343 [preauth] Jan 30 05:37:50.834096 systemd[1]: sshd@54-128.140.113.241:22-186.10.125.209:7343.service: Deactivated successfully. Jan 30 05:37:51.799111 systemd[1]: Started sshd@55-128.140.113.241:22-139.178.89.65:41554.service - OpenSSH per-connection server daemon (139.178.89.65:41554). Jan 30 05:37:52.803752 sshd[6956]: Accepted publickey for core from 139.178.89.65 port 41554 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:52.807130 sshd[6956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:52.816414 systemd-logind[1473]: New session 49 of user core. Jan 30 05:37:52.825184 systemd[1]: Started session-49.scope - Session 49 of User core. Jan 30 05:37:53.608342 sshd[6956]: pam_unix(sshd:session): session closed for user core Jan 30 05:37:53.616387 systemd[1]: sshd@55-128.140.113.241:22-139.178.89.65:41554.service: Deactivated successfully. Jan 30 05:37:53.621768 systemd[1]: session-49.scope: Deactivated successfully. Jan 30 05:37:53.623640 systemd-logind[1473]: Session 49 logged out. Waiting for processes to exit. Jan 30 05:37:53.625961 systemd-logind[1473]: Removed session 49. Jan 30 05:37:58.788943 systemd[1]: Started sshd@56-128.140.113.241:22-139.178.89.65:41568.service - OpenSSH per-connection server daemon (139.178.89.65:41568). Jan 30 05:37:59.798366 sshd[6969]: Accepted publickey for core from 139.178.89.65 port 41568 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:37:59.802218 sshd[6969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:37:59.812021 systemd-logind[1473]: New session 50 of user core. Jan 30 05:37:59.820159 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 30 05:38:00.595769 sshd[6969]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:00.604336 systemd[1]: sshd@56-128.140.113.241:22-139.178.89.65:41568.service: Deactivated successfully. Jan 30 05:38:00.608043 systemd[1]: session-50.scope: Deactivated successfully. Jan 30 05:38:00.612413 systemd-logind[1473]: Session 50 logged out. Waiting for processes to exit. Jan 30 05:38:00.615339 systemd-logind[1473]: Removed session 50. Jan 30 05:38:05.771559 systemd[1]: Started sshd@57-128.140.113.241:22-139.178.89.65:57246.service - OpenSSH per-connection server daemon (139.178.89.65:57246). Jan 30 05:38:06.796836 sshd[7005]: Accepted publickey for core from 139.178.89.65 port 57246 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:06.800618 sshd[7005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:06.809825 systemd-logind[1473]: New session 51 of user core. Jan 30 05:38:06.819220 systemd[1]: Started session-51.scope - Session 51 of User core. 
Jan 30 05:38:07.660151 sshd[7005]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:07.667040 systemd[1]: sshd@57-128.140.113.241:22-139.178.89.65:57246.service: Deactivated successfully. Jan 30 05:38:07.672857 systemd[1]: session-51.scope: Deactivated successfully. Jan 30 05:38:07.677672 systemd-logind[1473]: Session 51 logged out. Waiting for processes to exit. Jan 30 05:38:07.680751 systemd-logind[1473]: Removed session 51. Jan 30 05:38:12.836456 systemd[1]: Started sshd@58-128.140.113.241:22-139.178.89.65:51310.service - OpenSSH per-connection server daemon (139.178.89.65:51310). Jan 30 05:38:13.827228 sshd[7017]: Accepted publickey for core from 139.178.89.65 port 51310 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:13.830615 sshd[7017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:13.841614 systemd-logind[1473]: New session 52 of user core. Jan 30 05:38:13.851178 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 30 05:38:14.636769 sshd[7017]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:14.643528 systemd[1]: sshd@58-128.140.113.241:22-139.178.89.65:51310.service: Deactivated successfully. Jan 30 05:38:14.649015 systemd[1]: session-52.scope: Deactivated successfully. Jan 30 05:38:14.652378 systemd-logind[1473]: Session 52 logged out. Waiting for processes to exit. Jan 30 05:38:14.654962 systemd-logind[1473]: Removed session 52. Jan 30 05:38:19.812228 systemd[1]: Started sshd@59-128.140.113.241:22-139.178.89.65:51326.service - OpenSSH per-connection server daemon (139.178.89.65:51326). Jan 30 05:38:20.803497 sshd[7068]: Accepted publickey for core from 139.178.89.65 port 51326 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:20.806974 sshd[7068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:20.817739 systemd-logind[1473]: New session 53 of user core. Jan 30 05:38:20.824240 systemd[1]: Started session-53.scope - Session 53 of User core. Jan 30 05:38:21.604019 sshd[7068]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:21.609074 systemd[1]: sshd@59-128.140.113.241:22-139.178.89.65:51326.service: Deactivated successfully. Jan 30 05:38:21.613857 systemd[1]: session-53.scope: Deactivated successfully. Jan 30 05:38:21.616650 systemd-logind[1473]: Session 53 logged out. Waiting for processes to exit. Jan 30 05:38:21.618736 systemd-logind[1473]: Removed session 53. Jan 30 05:38:26.780256 systemd[1]: Started sshd@60-128.140.113.241:22-139.178.89.65:36246.service - OpenSSH per-connection server daemon (139.178.89.65:36246). Jan 30 05:38:27.805324 sshd[7083]: Accepted publickey for core from 139.178.89.65 port 36246 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:27.809733 sshd[7083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:27.820257 systemd-logind[1473]: New session 54 of user core. Jan 30 05:38:27.827196 systemd[1]: Started session-54.scope - Session 54 of User core. Jan 30 05:38:28.592986 sshd[7083]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:28.602359 systemd[1]: sshd@60-128.140.113.241:22-139.178.89.65:36246.service: Deactivated successfully. Jan 30 05:38:28.607869 systemd[1]: session-54.scope: Deactivated successfully. Jan 30 05:38:28.610775 systemd-logind[1473]: Session 54 logged out. Waiting for processes to exit. Jan 30 05:38:28.613369 systemd-logind[1473]: Removed session 54. 
Jan 30 05:38:33.773638 systemd[1]: Started sshd@61-128.140.113.241:22-139.178.89.65:42364.service - OpenSSH per-connection server daemon (139.178.89.65:42364). Jan 30 05:38:34.821599 sshd[7118]: Accepted publickey for core from 139.178.89.65 port 42364 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:34.827448 sshd[7118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:34.837113 systemd-logind[1473]: New session 55 of user core. Jan 30 05:38:34.843141 systemd[1]: Started session-55.scope - Session 55 of User core. Jan 30 05:38:35.675372 sshd[7118]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:35.683167 systemd[1]: sshd@61-128.140.113.241:22-139.178.89.65:42364.service: Deactivated successfully. Jan 30 05:38:35.687580 systemd[1]: session-55.scope: Deactivated successfully. Jan 30 05:38:35.689684 systemd-logind[1473]: Session 55 logged out. Waiting for processes to exit. Jan 30 05:38:35.691966 systemd-logind[1473]: Removed session 55. Jan 30 05:38:40.858374 systemd[1]: Started sshd@62-128.140.113.241:22-139.178.89.65:42378.service - OpenSSH per-connection server daemon (139.178.89.65:42378). Jan 30 05:38:40.933392 systemd[1]: Started sshd@63-128.140.113.241:22-178.128.149.80:45036.service - OpenSSH per-connection server daemon (178.128.149.80:45036). Jan 30 05:38:41.477466 sshd[7139]: Invalid user deploy from 178.128.149.80 port 45036 Jan 30 05:38:41.571195 sshd[7139]: Received disconnect from 178.128.149.80 port 45036:11: Bye Bye [preauth] Jan 30 05:38:41.571195 sshd[7139]: Disconnected from invalid user deploy 178.128.149.80 port 45036 [preauth] Jan 30 05:38:41.576529 systemd[1]: sshd@63-128.140.113.241:22-178.128.149.80:45036.service: Deactivated successfully. Jan 30 05:38:41.869225 sshd[7136]: Accepted publickey for core from 139.178.89.65 port 42378 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:41.872441 sshd[7136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:41.880971 systemd-logind[1473]: New session 56 of user core. Jan 30 05:38:41.891200 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 30 05:38:42.709816 sshd[7136]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:42.717680 systemd[1]: sshd@62-128.140.113.241:22-139.178.89.65:42378.service: Deactivated successfully. Jan 30 05:38:42.722842 systemd[1]: session-56.scope: Deactivated successfully. Jan 30 05:38:42.726308 systemd-logind[1473]: Session 56 logged out. Waiting for processes to exit. Jan 30 05:38:42.728715 systemd-logind[1473]: Removed session 56. Jan 30 05:38:47.887780 systemd[1]: Started sshd@64-128.140.113.241:22-139.178.89.65:46214.service - OpenSSH per-connection server daemon (139.178.89.65:46214). Jan 30 05:38:48.903997 sshd[7185]: Accepted publickey for core from 139.178.89.65 port 46214 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:48.907528 sshd[7185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:48.918139 systemd-logind[1473]: New session 57 of user core. Jan 30 05:38:48.924155 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 30 05:38:49.710495 sshd[7185]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:49.717804 systemd[1]: sshd@64-128.140.113.241:22-139.178.89.65:46214.service: Deactivated successfully. Jan 30 05:38:49.723379 systemd[1]: session-57.scope: Deactivated successfully. 
Jan 30 05:38:49.727610 systemd-logind[1473]: Session 57 logged out. Waiting for processes to exit. Jan 30 05:38:49.730167 systemd-logind[1473]: Removed session 57. Jan 30 05:38:54.888362 systemd[1]: Started sshd@65-128.140.113.241:22-139.178.89.65:51280.service - OpenSSH per-connection server daemon (139.178.89.65:51280). Jan 30 05:38:55.889999 sshd[7199]: Accepted publickey for core from 139.178.89.65 port 51280 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:38:55.893559 sshd[7199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:38:55.903061 systemd-logind[1473]: New session 58 of user core. Jan 30 05:38:55.909330 systemd[1]: Started session-58.scope - Session 58 of User core. Jan 30 05:38:56.710785 sshd[7199]: pam_unix(sshd:session): session closed for user core Jan 30 05:38:56.722723 systemd[1]: sshd@65-128.140.113.241:22-139.178.89.65:51280.service: Deactivated successfully. Jan 30 05:38:56.731956 systemd[1]: session-58.scope: Deactivated successfully. Jan 30 05:38:56.740173 systemd-logind[1473]: Session 58 logged out. Waiting for processes to exit. Jan 30 05:38:56.743187 systemd-logind[1473]: Removed session 58. Jan 30 05:39:01.895854 systemd[1]: Started sshd@66-128.140.113.241:22-139.178.89.65:39758.service - OpenSSH per-connection server daemon (139.178.89.65:39758). Jan 30 05:39:02.903312 sshd[7215]: Accepted publickey for core from 139.178.89.65 port 39758 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:02.907637 sshd[7215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:02.917161 systemd-logind[1473]: New session 59 of user core. Jan 30 05:39:02.924107 systemd[1]: Started session-59.scope - Session 59 of User core. Jan 30 05:39:03.736469 sshd[7215]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:03.744450 systemd[1]: sshd@66-128.140.113.241:22-139.178.89.65:39758.service: Deactivated successfully. Jan 30 05:39:03.749608 systemd[1]: session-59.scope: Deactivated successfully. Jan 30 05:39:03.750998 systemd-logind[1473]: Session 59 logged out. Waiting for processes to exit. Jan 30 05:39:03.752940 systemd-logind[1473]: Removed session 59. Jan 30 05:39:08.917483 systemd[1]: Started sshd@67-128.140.113.241:22-139.178.89.65:39764.service - OpenSSH per-connection server daemon (139.178.89.65:39764). Jan 30 05:39:09.956564 sshd[7251]: Accepted publickey for core from 139.178.89.65 port 39764 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:09.959554 sshd[7251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:09.966943 systemd-logind[1473]: New session 60 of user core. Jan 30 05:39:09.977265 systemd[1]: Started session-60.scope - Session 60 of User core. Jan 30 05:39:10.759926 sshd[7251]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:10.769024 systemd[1]: sshd@67-128.140.113.241:22-139.178.89.65:39764.service: Deactivated successfully. Jan 30 05:39:10.774178 systemd[1]: session-60.scope: Deactivated successfully. Jan 30 05:39:10.776112 systemd-logind[1473]: Session 60 logged out. Waiting for processes to exit. Jan 30 05:39:10.779153 systemd-logind[1473]: Removed session 60. Jan 30 05:39:15.943428 systemd[1]: Started sshd@68-128.140.113.241:22-139.178.89.65:51078.service - OpenSSH per-connection server daemon (139.178.89.65:51078). 
Jan 30 05:39:16.947417 sshd[7283]: Accepted publickey for core from 139.178.89.65 port 51078 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:16.951024 sshd[7283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:16.961637 systemd-logind[1473]: New session 61 of user core. Jan 30 05:39:16.967233 systemd[1]: Started session-61.scope - Session 61 of User core. Jan 30 05:39:17.731831 sshd[7283]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:17.740028 systemd[1]: sshd@68-128.140.113.241:22-139.178.89.65:51078.service: Deactivated successfully. Jan 30 05:39:17.745512 systemd[1]: session-61.scope: Deactivated successfully. Jan 30 05:39:17.748168 systemd-logind[1473]: Session 61 logged out. Waiting for processes to exit. Jan 30 05:39:17.752258 systemd-logind[1473]: Removed session 61. Jan 30 05:39:20.516506 systemd[1]: Started sshd@69-128.140.113.241:22-186.10.125.209:19502.service - OpenSSH per-connection server daemon (186.10.125.209:19502). Jan 30 05:39:21.785756 sshd[7313]: Invalid user test1 from 186.10.125.209 port 19502 Jan 30 05:39:22.028352 sshd[7313]: Received disconnect from 186.10.125.209 port 19502:11: Bye Bye [preauth] Jan 30 05:39:22.028352 sshd[7313]: Disconnected from invalid user test1 186.10.125.209 port 19502 [preauth] Jan 30 05:39:22.034228 systemd[1]: sshd@69-128.140.113.241:22-186.10.125.209:19502.service: Deactivated successfully. Jan 30 05:39:22.910429 systemd[1]: Started sshd@70-128.140.113.241:22-139.178.89.65:60038.service - OpenSSH per-connection server daemon (139.178.89.65:60038). Jan 30 05:39:23.920471 sshd[7318]: Accepted publickey for core from 139.178.89.65 port 60038 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:23.923763 sshd[7318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:23.932258 systemd-logind[1473]: New session 62 of user core. Jan 30 05:39:23.939167 systemd[1]: Started session-62.scope - Session 62 of User core. Jan 30 05:39:24.736543 sshd[7318]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:24.743793 systemd[1]: sshd@70-128.140.113.241:22-139.178.89.65:60038.service: Deactivated successfully. Jan 30 05:39:24.748673 systemd[1]: session-62.scope: Deactivated successfully. Jan 30 05:39:24.749803 systemd-logind[1473]: Session 62 logged out. Waiting for processes to exit. Jan 30 05:39:24.751331 systemd-logind[1473]: Removed session 62. Jan 30 05:39:29.916460 systemd[1]: Started sshd@71-128.140.113.241:22-139.178.89.65:60050.service - OpenSSH per-connection server daemon (139.178.89.65:60050). Jan 30 05:39:30.909998 sshd[7338]: Accepted publickey for core from 139.178.89.65 port 60050 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:30.913374 sshd[7338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:30.923010 systemd-logind[1473]: New session 63 of user core. Jan 30 05:39:30.932118 systemd[1]: Started session-63.scope - Session 63 of User core. Jan 30 05:39:31.658719 sshd[7338]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:31.667112 systemd[1]: sshd@71-128.140.113.241:22-139.178.89.65:60050.service: Deactivated successfully. Jan 30 05:39:31.673155 systemd[1]: session-63.scope: Deactivated successfully. Jan 30 05:39:31.675700 systemd-logind[1473]: Session 63 logged out. Waiting for processes to exit. Jan 30 05:39:31.679323 systemd-logind[1473]: Removed session 63. 
Jan 30 05:39:33.399591 systemd[1]: run-containerd-runc-k8s.io-88a59495951db75dacdb5fab342d21492cdb072f252891a52f4be37c66db1207-runc.buu2iT.mount: Deactivated successfully. Jan 30 05:39:36.842585 systemd[1]: Started sshd@72-128.140.113.241:22-139.178.89.65:40038.service - OpenSSH per-connection server daemon (139.178.89.65:40038). Jan 30 05:39:37.849140 sshd[7373]: Accepted publickey for core from 139.178.89.65 port 40038 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:37.852461 sshd[7373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:37.862487 systemd-logind[1473]: New session 64 of user core. Jan 30 05:39:37.868139 systemd[1]: Started session-64.scope - Session 64 of User core. Jan 30 05:39:38.669272 sshd[7373]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:38.678685 systemd[1]: sshd@72-128.140.113.241:22-139.178.89.65:40038.service: Deactivated successfully. Jan 30 05:39:38.685926 systemd[1]: session-64.scope: Deactivated successfully. Jan 30 05:39:38.691068 systemd-logind[1473]: Session 64 logged out. Waiting for processes to exit. Jan 30 05:39:38.693241 systemd-logind[1473]: Removed session 64. Jan 30 05:39:43.847453 systemd[1]: Started sshd@73-128.140.113.241:22-139.178.89.65:55134.service - OpenSSH per-connection server daemon (139.178.89.65:55134). Jan 30 05:39:44.852041 sshd[7385]: Accepted publickey for core from 139.178.89.65 port 55134 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:44.855339 sshd[7385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:44.863852 systemd-logind[1473]: New session 65 of user core. Jan 30 05:39:44.868102 systemd[1]: Started session-65.scope - Session 65 of User core. Jan 30 05:39:45.695607 sshd[7385]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:45.704273 systemd[1]: sshd@73-128.140.113.241:22-139.178.89.65:55134.service: Deactivated successfully. Jan 30 05:39:45.710484 systemd[1]: session-65.scope: Deactivated successfully. Jan 30 05:39:45.712455 systemd-logind[1473]: Session 65 logged out. Waiting for processes to exit. Jan 30 05:39:45.714441 systemd-logind[1473]: Removed session 65. Jan 30 05:39:46.094428 systemd[1]: Started sshd@74-128.140.113.241:22-178.128.149.80:43082.service - OpenSSH per-connection server daemon (178.128.149.80:43082). Jan 30 05:39:46.643232 sshd[7424]: Invalid user ftpuser from 178.128.149.80 port 43082 Jan 30 05:39:46.738978 sshd[7424]: Received disconnect from 178.128.149.80 port 43082:11: Bye Bye [preauth] Jan 30 05:39:46.738978 sshd[7424]: Disconnected from invalid user ftpuser 178.128.149.80 port 43082 [preauth] Jan 30 05:39:46.744935 systemd[1]: sshd@74-128.140.113.241:22-178.128.149.80:43082.service: Deactivated successfully. Jan 30 05:39:50.870055 systemd[1]: Started sshd@75-128.140.113.241:22-139.178.89.65:55136.service - OpenSSH per-connection server daemon (139.178.89.65:55136). Jan 30 05:39:51.864977 sshd[7432]: Accepted publickey for core from 139.178.89.65 port 55136 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:51.870128 sshd[7432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:51.879967 systemd-logind[1473]: New session 66 of user core. Jan 30 05:39:51.888136 systemd[1]: Started session-66.scope - Session 66 of User core. 
Jan 30 05:39:52.685156 sshd[7432]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:52.694779 systemd[1]: sshd@75-128.140.113.241:22-139.178.89.65:55136.service: Deactivated successfully. Jan 30 05:39:52.699797 systemd[1]: session-66.scope: Deactivated successfully. Jan 30 05:39:52.703109 systemd-logind[1473]: Session 66 logged out. Waiting for processes to exit. Jan 30 05:39:52.705396 systemd-logind[1473]: Removed session 66. Jan 30 05:39:57.874530 systemd[1]: Started sshd@76-128.140.113.241:22-139.178.89.65:34350.service - OpenSSH per-connection server daemon (139.178.89.65:34350). Jan 30 05:39:58.880534 sshd[7446]: Accepted publickey for core from 139.178.89.65 port 34350 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:39:58.884383 sshd[7446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:39:58.898449 systemd-logind[1473]: New session 67 of user core. Jan 30 05:39:58.908232 systemd[1]: Started session-67.scope - Session 67 of User core. Jan 30 05:39:59.700582 sshd[7446]: pam_unix(sshd:session): session closed for user core Jan 30 05:39:59.709094 systemd[1]: sshd@76-128.140.113.241:22-139.178.89.65:34350.service: Deactivated successfully. Jan 30 05:39:59.714641 systemd[1]: session-67.scope: Deactivated successfully. Jan 30 05:39:59.716368 systemd-logind[1473]: Session 67 logged out. Waiting for processes to exit. Jan 30 05:39:59.718936 systemd-logind[1473]: Removed session 67. Jan 30 05:40:04.883375 systemd[1]: Started sshd@77-128.140.113.241:22-139.178.89.65:55718.service - OpenSSH per-connection server daemon (139.178.89.65:55718). Jan 30 05:40:05.944580 sshd[7483]: Accepted publickey for core from 139.178.89.65 port 55718 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:05.950513 sshd[7483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:05.960827 systemd-logind[1473]: New session 68 of user core. Jan 30 05:40:05.968244 systemd[1]: Started session-68.scope - Session 68 of User core. Jan 30 05:40:06.839292 sshd[7483]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:06.849478 systemd[1]: sshd@77-128.140.113.241:22-139.178.89.65:55718.service: Deactivated successfully. Jan 30 05:40:06.855921 systemd[1]: session-68.scope: Deactivated successfully. Jan 30 05:40:06.857577 systemd-logind[1473]: Session 68 logged out. Waiting for processes to exit. Jan 30 05:40:06.859572 systemd-logind[1473]: Removed session 68. Jan 30 05:40:12.026548 systemd[1]: Started sshd@78-128.140.113.241:22-139.178.89.65:46962.service - OpenSSH per-connection server daemon (139.178.89.65:46962). Jan 30 05:40:13.039335 sshd[7501]: Accepted publickey for core from 139.178.89.65 port 46962 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:13.042797 sshd[7501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:13.054544 systemd-logind[1473]: New session 69 of user core. Jan 30 05:40:13.062220 systemd[1]: Started session-69.scope - Session 69 of User core. Jan 30 05:40:13.874332 sshd[7501]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:13.879809 systemd[1]: sshd@78-128.140.113.241:22-139.178.89.65:46962.service: Deactivated successfully. Jan 30 05:40:13.884357 systemd[1]: session-69.scope: Deactivated successfully. Jan 30 05:40:13.887641 systemd-logind[1473]: Session 69 logged out. Waiting for processes to exit. Jan 30 05:40:13.890194 systemd-logind[1473]: Removed session 69. 
Jan 30 05:40:14.851402 systemd[1]: Started sshd@79-128.140.113.241:22-176.10.207.140:48352.service - OpenSSH per-connection server daemon (176.10.207.140:48352). Jan 30 05:40:15.073736 sshd[7514]: Invalid user sysadmin from 176.10.207.140 port 48352 Jan 30 05:40:15.102452 sshd[7514]: Received disconnect from 176.10.207.140 port 48352:11: Bye Bye [preauth] Jan 30 05:40:15.102452 sshd[7514]: Disconnected from invalid user sysadmin 176.10.207.140 port 48352 [preauth] Jan 30 05:40:15.108125 systemd[1]: sshd@79-128.140.113.241:22-176.10.207.140:48352.service: Deactivated successfully. Jan 30 05:40:19.054406 systemd[1]: Started sshd@80-128.140.113.241:22-139.178.89.65:46970.service - OpenSSH per-connection server daemon (139.178.89.65:46970). Jan 30 05:40:20.072125 sshd[7570]: Accepted publickey for core from 139.178.89.65 port 46970 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:20.075119 sshd[7570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:20.085221 systemd-logind[1473]: New session 70 of user core. Jan 30 05:40:20.090179 systemd[1]: Started session-70.scope - Session 70 of User core. Jan 30 05:40:20.928946 sshd[7570]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:20.937412 systemd[1]: sshd@80-128.140.113.241:22-139.178.89.65:46970.service: Deactivated successfully. Jan 30 05:40:20.942743 systemd[1]: session-70.scope: Deactivated successfully. Jan 30 05:40:20.944236 systemd-logind[1473]: Session 70 logged out. Waiting for processes to exit. Jan 30 05:40:20.946234 systemd-logind[1473]: Removed session 70. Jan 30 05:40:26.109406 systemd[1]: Started sshd@81-128.140.113.241:22-139.178.89.65:42494.service - OpenSSH per-connection server daemon (139.178.89.65:42494). Jan 30 05:40:27.124838 sshd[7586]: Accepted publickey for core from 139.178.89.65 port 42494 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:27.128286 sshd[7586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:27.138797 systemd-logind[1473]: New session 71 of user core. Jan 30 05:40:27.143305 systemd[1]: Started session-71.scope - Session 71 of User core. Jan 30 05:40:27.896295 sshd[7586]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:27.902981 systemd[1]: sshd@81-128.140.113.241:22-139.178.89.65:42494.service: Deactivated successfully. Jan 30 05:40:27.909078 systemd[1]: session-71.scope: Deactivated successfully. Jan 30 05:40:27.912673 systemd-logind[1473]: Session 71 logged out. Waiting for processes to exit. Jan 30 05:40:27.915977 systemd-logind[1473]: Removed session 71. Jan 30 05:40:33.082643 systemd[1]: Started sshd@82-128.140.113.241:22-139.178.89.65:44122.service - OpenSSH per-connection server daemon (139.178.89.65:44122). Jan 30 05:40:34.099686 sshd[7602]: Accepted publickey for core from 139.178.89.65 port 44122 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:34.103691 sshd[7602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:34.113342 systemd-logind[1473]: New session 72 of user core. Jan 30 05:40:34.123148 systemd[1]: Started session-72.scope - Session 72 of User core. Jan 30 05:40:34.956813 sshd[7602]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:34.963660 systemd[1]: sshd@82-128.140.113.241:22-139.178.89.65:44122.service: Deactivated successfully. Jan 30 05:40:34.968631 systemd[1]: session-72.scope: Deactivated successfully. 
Jan 30 05:40:34.971678 systemd-logind[1473]: Session 72 logged out. Waiting for processes to exit. Jan 30 05:40:34.974207 systemd-logind[1473]: Removed session 72. Jan 30 05:40:40.136435 systemd[1]: Started sshd@83-128.140.113.241:22-139.178.89.65:44132.service - OpenSSH per-connection server daemon (139.178.89.65:44132). Jan 30 05:40:40.156533 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Jan 30 05:40:40.220424 systemd-tmpfiles[7638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:40:40.221789 systemd-tmpfiles[7638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:40:40.224010 systemd-tmpfiles[7638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:40:40.224551 systemd-tmpfiles[7638]: ACLs are not supported, ignoring. Jan 30 05:40:40.224712 systemd-tmpfiles[7638]: ACLs are not supported, ignoring. Jan 30 05:40:40.236106 systemd-tmpfiles[7638]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:40:40.236129 systemd-tmpfiles[7638]: Skipping /boot Jan 30 05:40:40.255751 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 30 05:40:40.256429 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 30 05:40:41.170188 sshd[7637]: Accepted publickey for core from 139.178.89.65 port 44132 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:41.173985 sshd[7637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:41.185001 systemd-logind[1473]: New session 73 of user core. Jan 30 05:40:41.191171 systemd[1]: Started session-73.scope - Session 73 of User core. Jan 30 05:40:41.993849 sshd[7637]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:42.002087 systemd[1]: sshd@83-128.140.113.241:22-139.178.89.65:44132.service: Deactivated successfully. Jan 30 05:40:42.007420 systemd[1]: session-73.scope: Deactivated successfully. Jan 30 05:40:42.009112 systemd-logind[1473]: Session 73 logged out. Waiting for processes to exit. Jan 30 05:40:42.011266 systemd-logind[1473]: Removed session 73. Jan 30 05:40:47.167430 systemd[1]: Started sshd@84-128.140.113.241:22-139.178.89.65:54638.service - OpenSSH per-connection server daemon (139.178.89.65:54638). Jan 30 05:40:48.152971 sshd[7669]: Accepted publickey for core from 139.178.89.65 port 54638 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:48.156625 sshd[7669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:48.166986 systemd-logind[1473]: New session 74 of user core. Jan 30 05:40:48.176235 systemd[1]: Started session-74.scope - Session 74 of User core. Jan 30 05:40:48.951728 sshd[7669]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:48.960715 systemd[1]: sshd@84-128.140.113.241:22-139.178.89.65:54638.service: Deactivated successfully. Jan 30 05:40:48.964765 systemd[1]: session-74.scope: Deactivated successfully. Jan 30 05:40:48.966448 systemd-logind[1473]: Session 74 logged out. Waiting for processes to exit. Jan 30 05:40:48.969001 systemd-logind[1473]: Removed session 74. Jan 30 05:40:50.019390 systemd[1]: Started sshd@85-128.140.113.241:22-178.128.149.80:41124.service - OpenSSH per-connection server daemon (178.128.149.80:41124). 
Jan 30 05:40:50.570146 sshd[7682]: Invalid user ftpuser from 178.128.149.80 port 41124 Jan 30 05:40:50.664117 sshd[7682]: Received disconnect from 178.128.149.80 port 41124:11: Bye Bye [preauth] Jan 30 05:40:50.664117 sshd[7682]: Disconnected from invalid user ftpuser 178.128.149.80 port 41124 [preauth] Jan 30 05:40:50.671180 systemd[1]: sshd@85-128.140.113.241:22-178.128.149.80:41124.service: Deactivated successfully. Jan 30 05:40:53.333464 systemd[1]: Started sshd@86-128.140.113.241:22-186.10.125.209:32702.service - OpenSSH per-connection server daemon (186.10.125.209:32702). Jan 30 05:40:54.132387 systemd[1]: Started sshd@87-128.140.113.241:22-139.178.89.65:34346.service - OpenSSH per-connection server daemon (139.178.89.65:34346). Jan 30 05:40:54.620404 sshd[7687]: Invalid user sammy from 186.10.125.209 port 32702 Jan 30 05:40:54.863338 sshd[7687]: Received disconnect from 186.10.125.209 port 32702:11: Bye Bye [preauth] Jan 30 05:40:54.863338 sshd[7687]: Disconnected from invalid user sammy 186.10.125.209 port 32702 [preauth] Jan 30 05:40:54.869430 systemd[1]: sshd@86-128.140.113.241:22-186.10.125.209:32702.service: Deactivated successfully. Jan 30 05:40:55.131016 sshd[7690]: Accepted publickey for core from 139.178.89.65 port 34346 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:40:55.134795 sshd[7690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:40:55.146299 systemd-logind[1473]: New session 75 of user core. Jan 30 05:40:55.152149 systemd[1]: Started session-75.scope - Session 75 of User core. Jan 30 05:40:55.936166 sshd[7690]: pam_unix(sshd:session): session closed for user core Jan 30 05:40:55.943101 systemd[1]: sshd@87-128.140.113.241:22-139.178.89.65:34346.service: Deactivated successfully. Jan 30 05:40:55.947971 systemd[1]: session-75.scope: Deactivated successfully. Jan 30 05:40:55.951319 systemd-logind[1473]: Session 75 logged out. Waiting for processes to exit. Jan 30 05:40:55.954012 systemd-logind[1473]: Removed session 75. Jan 30 05:41:01.118440 systemd[1]: Started sshd@88-128.140.113.241:22-139.178.89.65:34352.service - OpenSSH per-connection server daemon (139.178.89.65:34352). Jan 30 05:41:02.119561 sshd[7707]: Accepted publickey for core from 139.178.89.65 port 34352 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:02.122843 sshd[7707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:02.133153 systemd-logind[1473]: New session 76 of user core. Jan 30 05:41:02.144268 systemd[1]: Started session-76.scope - Session 76 of User core. Jan 30 05:41:02.915279 sshd[7707]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:02.923016 systemd[1]: sshd@88-128.140.113.241:22-139.178.89.65:34352.service: Deactivated successfully. Jan 30 05:41:02.927330 systemd[1]: session-76.scope: Deactivated successfully. Jan 30 05:41:02.931634 systemd-logind[1473]: Session 76 logged out. Waiting for processes to exit. Jan 30 05:41:02.933868 systemd-logind[1473]: Removed session 76. Jan 30 05:41:07.403505 systemd[1]: Started sshd@89-128.140.113.241:22-103.146.159.74:52946.service - OpenSSH per-connection server daemon (103.146.159.74:52946). Jan 30 05:41:08.096151 systemd[1]: Started sshd@90-128.140.113.241:22-139.178.89.65:60352.service - OpenSSH per-connection server daemon (139.178.89.65:60352). 
Jan 30 05:41:09.120700 sshd[7743]: Accepted publickey for core from 139.178.89.65 port 60352 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:09.126318 sshd[7743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:09.134310 systemd-logind[1473]: New session 77 of user core. Jan 30 05:41:09.144129 systemd[1]: Started session-77.scope - Session 77 of User core. Jan 30 05:41:10.066647 sshd[7743]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:10.078587 systemd[1]: sshd@90-128.140.113.241:22-139.178.89.65:60352.service: Deactivated successfully. Jan 30 05:41:10.084499 systemd[1]: session-77.scope: Deactivated successfully. Jan 30 05:41:10.086646 systemd-logind[1473]: Session 77 logged out. Waiting for processes to exit. Jan 30 05:41:10.089070 systemd-logind[1473]: Removed session 77. Jan 30 05:41:15.244391 systemd[1]: Started sshd@91-128.140.113.241:22-139.178.89.65:54360.service - OpenSSH per-connection server daemon (139.178.89.65:54360). Jan 30 05:41:15.516143 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.6ekpJ5.mount: Deactivated successfully. Jan 30 05:41:16.235278 sshd[7756]: Accepted publickey for core from 139.178.89.65 port 54360 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:16.239129 sshd[7756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:16.248653 systemd-logind[1473]: New session 78 of user core. Jan 30 05:41:16.255250 systemd[1]: Started session-78.scope - Session 78 of User core. Jan 30 05:41:17.058961 sshd[7756]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:17.071387 systemd-logind[1473]: Session 78 logged out. Waiting for processes to exit. Jan 30 05:41:17.072497 systemd[1]: sshd@91-128.140.113.241:22-139.178.89.65:54360.service: Deactivated successfully. Jan 30 05:41:17.080017 systemd[1]: session-78.scope: Deactivated successfully. Jan 30 05:41:17.086311 systemd-logind[1473]: Removed session 78. Jan 30 05:41:22.233321 systemd[1]: Started sshd@92-128.140.113.241:22-139.178.89.65:52856.service - OpenSSH per-connection server daemon (139.178.89.65:52856). Jan 30 05:41:23.238773 sshd[7808]: Accepted publickey for core from 139.178.89.65 port 52856 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:23.242309 sshd[7808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:23.252432 systemd-logind[1473]: New session 79 of user core. Jan 30 05:41:23.259244 systemd[1]: Started session-79.scope - Session 79 of User core. Jan 30 05:41:24.078206 sshd[7808]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:24.088954 systemd[1]: sshd@92-128.140.113.241:22-139.178.89.65:52856.service: Deactivated successfully. Jan 30 05:41:24.094504 systemd[1]: session-79.scope: Deactivated successfully. Jan 30 05:41:24.097350 systemd-logind[1473]: Session 79 logged out. Waiting for processes to exit. Jan 30 05:41:24.100488 systemd-logind[1473]: Removed session 79. Jan 30 05:41:25.200372 systemd[1]: Started sshd@93-128.140.113.241:22-176.10.207.140:34868.service - OpenSSH per-connection server daemon (176.10.207.140:34868). 
Jan 30 05:41:25.450060 sshd[7822]: Invalid user vpn from 176.10.207.140 port 34868 Jan 30 05:41:25.483809 sshd[7822]: Received disconnect from 176.10.207.140 port 34868:11: Bye Bye [preauth] Jan 30 05:41:25.483809 sshd[7822]: Disconnected from invalid user vpn 176.10.207.140 port 34868 [preauth] Jan 30 05:41:25.487877 systemd[1]: sshd@93-128.140.113.241:22-176.10.207.140:34868.service: Deactivated successfully. Jan 30 05:41:29.257341 systemd[1]: Started sshd@94-128.140.113.241:22-139.178.89.65:52868.service - OpenSSH per-connection server daemon (139.178.89.65:52868). Jan 30 05:41:30.272459 sshd[7829]: Accepted publickey for core from 139.178.89.65 port 52868 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:30.278344 sshd[7829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:30.288027 systemd-logind[1473]: New session 80 of user core. Jan 30 05:41:30.296179 systemd[1]: Started session-80.scope - Session 80 of User core. Jan 30 05:41:31.114347 sshd[7829]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:31.122335 systemd[1]: sshd@94-128.140.113.241:22-139.178.89.65:52868.service: Deactivated successfully. Jan 30 05:41:31.129240 systemd[1]: session-80.scope: Deactivated successfully. Jan 30 05:41:31.131876 systemd-logind[1473]: Session 80 logged out. Waiting for processes to exit. Jan 30 05:41:31.134224 systemd-logind[1473]: Removed session 80. Jan 30 05:41:31.297477 systemd[1]: Started sshd@95-128.140.113.241:22-139.178.89.65:58010.service - OpenSSH per-connection server daemon (139.178.89.65:58010). Jan 30 05:41:32.300180 sshd[7843]: Accepted publickey for core from 139.178.89.65 port 58010 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:32.304179 sshd[7843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:32.315729 systemd-logind[1473]: New session 81 of user core. Jan 30 05:41:32.322408 systemd[1]: Started session-81.scope - Session 81 of User core. Jan 30 05:41:33.432199 sshd[7843]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:33.441472 systemd[1]: sshd@95-128.140.113.241:22-139.178.89.65:58010.service: Deactivated successfully. Jan 30 05:41:33.444720 systemd[1]: session-81.scope: Deactivated successfully. Jan 30 05:41:33.449269 systemd-logind[1473]: Session 81 logged out. Waiting for processes to exit. Jan 30 05:41:33.452213 systemd-logind[1473]: Removed session 81. Jan 30 05:41:33.607553 systemd[1]: Started sshd@96-128.140.113.241:22-139.178.89.65:58012.service - OpenSSH per-connection server daemon (139.178.89.65:58012). Jan 30 05:41:34.650413 sshd[7876]: Accepted publickey for core from 139.178.89.65 port 58012 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:34.656624 sshd[7876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:34.668191 systemd-logind[1473]: New session 82 of user core. Jan 30 05:41:34.673186 systemd[1]: Started session-82.scope - Session 82 of User core. Jan 30 05:41:36.537430 sshd[7876]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:36.544351 systemd[1]: sshd@96-128.140.113.241:22-139.178.89.65:58012.service: Deactivated successfully. Jan 30 05:41:36.549562 systemd[1]: session-82.scope: Deactivated successfully. Jan 30 05:41:36.553595 systemd-logind[1473]: Session 82 logged out. Waiting for processes to exit. Jan 30 05:41:36.556278 systemd-logind[1473]: Removed session 82. 
Jan 30 05:41:36.710824 systemd[1]: Started sshd@97-128.140.113.241:22-139.178.89.65:58014.service - OpenSSH per-connection server daemon (139.178.89.65:58014). Jan 30 05:41:37.708755 sshd[7897]: Accepted publickey for core from 139.178.89.65 port 58014 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:37.713408 sshd[7897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:37.723582 systemd-logind[1473]: New session 83 of user core. Jan 30 05:41:37.733209 systemd[1]: Started session-83.scope - Session 83 of User core. Jan 30 05:41:38.990329 sshd[7897]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:39.001840 systemd[1]: sshd@97-128.140.113.241:22-139.178.89.65:58014.service: Deactivated successfully. Jan 30 05:41:39.009368 systemd[1]: session-83.scope: Deactivated successfully. Jan 30 05:41:39.011738 systemd-logind[1473]: Session 83 logged out. Waiting for processes to exit. Jan 30 05:41:39.017010 systemd-logind[1473]: Removed session 83. Jan 30 05:41:39.166601 systemd[1]: Started sshd@98-128.140.113.241:22-139.178.89.65:58022.service - OpenSSH per-connection server daemon (139.178.89.65:58022). Jan 30 05:41:40.196398 sshd[7908]: Accepted publickey for core from 139.178.89.65 port 58022 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:40.200586 sshd[7908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:40.212081 systemd-logind[1473]: New session 84 of user core. Jan 30 05:41:40.217206 systemd[1]: Started session-84.scope - Session 84 of User core. Jan 30 05:41:40.999968 sshd[7908]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:41.007811 systemd[1]: sshd@98-128.140.113.241:22-139.178.89.65:58022.service: Deactivated successfully. Jan 30 05:41:41.013199 systemd[1]: session-84.scope: Deactivated successfully. Jan 30 05:41:41.016565 systemd-logind[1473]: Session 84 logged out. Waiting for processes to exit. Jan 30 05:41:41.019325 systemd-logind[1473]: Removed session 84. Jan 30 05:41:46.179855 systemd[1]: Started sshd@99-128.140.113.241:22-139.178.89.65:56084.service - OpenSSH per-connection server daemon (139.178.89.65:56084). Jan 30 05:41:47.186391 sshd[7946]: Accepted publickey for core from 139.178.89.65 port 56084 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:47.190072 sshd[7946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:47.199751 systemd-logind[1473]: New session 85 of user core. Jan 30 05:41:47.208151 systemd[1]: Started session-85.scope - Session 85 of User core. Jan 30 05:41:47.988319 sshd[7946]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:47.997218 systemd[1]: sshd@99-128.140.113.241:22-139.178.89.65:56084.service: Deactivated successfully. Jan 30 05:41:48.002945 systemd[1]: session-85.scope: Deactivated successfully. Jan 30 05:41:48.004744 systemd-logind[1473]: Session 85 logged out. Waiting for processes to exit. Jan 30 05:41:48.007245 systemd-logind[1473]: Removed session 85. Jan 30 05:41:53.174479 systemd[1]: Started sshd@100-128.140.113.241:22-139.178.89.65:49484.service - OpenSSH per-connection server daemon (139.178.89.65:49484). 
Jan 30 05:41:54.193571 sshd[7972]: Accepted publickey for core from 139.178.89.65 port 49484 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:41:54.197360 sshd[7972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:41:54.208135 systemd-logind[1473]: New session 86 of user core. Jan 30 05:41:54.217217 systemd[1]: Started session-86.scope - Session 86 of User core. Jan 30 05:41:54.999181 sshd[7972]: pam_unix(sshd:session): session closed for user core Jan 30 05:41:55.009279 systemd[1]: sshd@100-128.140.113.241:22-139.178.89.65:49484.service: Deactivated successfully. Jan 30 05:41:55.015723 systemd[1]: session-86.scope: Deactivated successfully. Jan 30 05:41:55.017429 systemd-logind[1473]: Session 86 logged out. Waiting for processes to exit. Jan 30 05:41:55.019538 systemd-logind[1473]: Removed session 86. Jan 30 05:41:55.274498 systemd[1]: Started sshd@101-128.140.113.241:22-92.118.39.86:32982.service - OpenSSH per-connection server daemon (92.118.39.86:32982). Jan 30 05:41:55.368225 sshd[7986]: Connection closed by 92.118.39.86 port 32982 Jan 30 05:41:55.371475 systemd[1]: sshd@101-128.140.113.241:22-92.118.39.86:32982.service: Deactivated successfully. Jan 30 05:41:57.080604 systemd[1]: Started sshd@102-128.140.113.241:22-178.128.149.80:39170.service - OpenSSH per-connection server daemon (178.128.149.80:39170). Jan 30 05:41:57.630779 sshd[7990]: Invalid user deploy from 178.128.149.80 port 39170 Jan 30 05:41:57.728412 sshd[7990]: Received disconnect from 178.128.149.80 port 39170:11: Bye Bye [preauth] Jan 30 05:41:57.728412 sshd[7990]: Disconnected from invalid user deploy 178.128.149.80 port 39170 [preauth] Jan 30 05:41:57.733967 systemd[1]: sshd@102-128.140.113.241:22-178.128.149.80:39170.service: Deactivated successfully. Jan 30 05:42:00.179438 systemd[1]: Started sshd@103-128.140.113.241:22-139.178.89.65:49496.service - OpenSSH per-connection server daemon (139.178.89.65:49496). Jan 30 05:42:01.175568 sshd[7997]: Accepted publickey for core from 139.178.89.65 port 49496 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:01.178188 sshd[7997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:01.186723 systemd-logind[1473]: New session 87 of user core. Jan 30 05:42:01.199312 systemd[1]: Started session-87.scope - Session 87 of User core. Jan 30 05:42:01.977354 sshd[7997]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:01.985816 systemd[1]: sshd@103-128.140.113.241:22-139.178.89.65:49496.service: Deactivated successfully. Jan 30 05:42:01.990984 systemd[1]: session-87.scope: Deactivated successfully. Jan 30 05:42:01.992588 systemd-logind[1473]: Session 87 logged out. Waiting for processes to exit. Jan 30 05:42:01.994145 systemd-logind[1473]: Removed session 87. Jan 30 05:42:07.153356 systemd[1]: Started sshd@104-128.140.113.241:22-139.178.89.65:42872.service - OpenSSH per-connection server daemon (139.178.89.65:42872). Jan 30 05:42:08.177162 sshd[8032]: Accepted publickey for core from 139.178.89.65 port 42872 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:08.182171 sshd[8032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:08.192246 systemd-logind[1473]: New session 88 of user core. Jan 30 05:42:08.197215 systemd[1]: Started session-88.scope - Session 88 of User core. 
Jan 30 05:42:09.077741 sshd[8032]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:09.084087 systemd[1]: sshd@104-128.140.113.241:22-139.178.89.65:42872.service: Deactivated successfully. Jan 30 05:42:09.090714 systemd[1]: session-88.scope: Deactivated successfully. Jan 30 05:42:09.094088 systemd-logind[1473]: Session 88 logged out. Waiting for processes to exit. Jan 30 05:42:09.096929 systemd-logind[1473]: Removed session 88. Jan 30 05:42:14.257599 systemd[1]: Started sshd@105-128.140.113.241:22-139.178.89.65:58816.service - OpenSSH per-connection server daemon (139.178.89.65:58816). Jan 30 05:42:15.264207 sshd[8045]: Accepted publickey for core from 139.178.89.65 port 58816 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:15.268029 sshd[8045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:15.276799 systemd-logind[1473]: New session 89 of user core. Jan 30 05:42:15.284227 systemd[1]: Started session-89.scope - Session 89 of User core. Jan 30 05:42:16.042964 sshd[8045]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:16.049653 systemd[1]: sshd@105-128.140.113.241:22-139.178.89.65:58816.service: Deactivated successfully. Jan 30 05:42:16.057140 systemd[1]: session-89.scope: Deactivated successfully. Jan 30 05:42:16.066451 systemd-logind[1473]: Session 89 logged out. Waiting for processes to exit. Jan 30 05:42:16.070876 systemd-logind[1473]: Removed session 89. Jan 30 05:42:21.225393 systemd[1]: Started sshd@106-128.140.113.241:22-139.178.89.65:58820.service - OpenSSH per-connection server daemon (139.178.89.65:58820). Jan 30 05:42:22.224732 sshd[8095]: Accepted publickey for core from 139.178.89.65 port 58820 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:22.229467 sshd[8095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:22.241916 systemd-logind[1473]: New session 90 of user core. Jan 30 05:42:22.244591 systemd[1]: Started session-90.scope - Session 90 of User core. Jan 30 05:42:23.061370 sshd[8095]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:23.068931 systemd-logind[1473]: Session 90 logged out. Waiting for processes to exit. Jan 30 05:42:23.076742 systemd[1]: sshd@106-128.140.113.241:22-139.178.89.65:58820.service: Deactivated successfully. Jan 30 05:42:23.084292 systemd[1]: session-90.scope: Deactivated successfully. Jan 30 05:42:23.087004 systemd-logind[1473]: Removed session 90. Jan 30 05:42:25.353481 systemd[1]: Started sshd@107-128.140.113.241:22-186.10.125.209:18338.service - OpenSSH per-connection server daemon (186.10.125.209:18338). Jan 30 05:42:26.626215 sshd[8110]: Invalid user server from 186.10.125.209 port 18338 Jan 30 05:42:26.858494 sshd[8110]: Received disconnect from 186.10.125.209 port 18338:11: Bye Bye [preauth] Jan 30 05:42:26.858494 sshd[8110]: Disconnected from invalid user server 186.10.125.209 port 18338 [preauth] Jan 30 05:42:26.863829 systemd[1]: sshd@107-128.140.113.241:22-186.10.125.209:18338.service: Deactivated successfully. Jan 30 05:42:28.239125 systemd[1]: Started sshd@108-128.140.113.241:22-139.178.89.65:42516.service - OpenSSH per-connection server daemon (139.178.89.65:42516). 
Jan 30 05:42:29.238513 sshd[8115]: Accepted publickey for core from 139.178.89.65 port 42516 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:29.244571 sshd[8115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:29.254273 systemd-logind[1473]: New session 91 of user core. Jan 30 05:42:29.260171 systemd[1]: Started session-91.scope - Session 91 of User core. Jan 30 05:42:30.110607 sshd[8115]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:30.118048 systemd[1]: sshd@108-128.140.113.241:22-139.178.89.65:42516.service: Deactivated successfully. Jan 30 05:42:30.124450 systemd[1]: session-91.scope: Deactivated successfully. Jan 30 05:42:30.126358 systemd-logind[1473]: Session 91 logged out. Waiting for processes to exit. Jan 30 05:42:30.128198 systemd-logind[1473]: Removed session 91. Jan 30 05:42:35.298653 systemd[1]: Started sshd@109-128.140.113.241:22-139.178.89.65:49464.service - OpenSSH per-connection server daemon (139.178.89.65:49464). Jan 30 05:42:36.021486 systemd[1]: Started sshd@110-128.140.113.241:22-176.10.207.140:44804.service - OpenSSH per-connection server daemon (176.10.207.140:44804). Jan 30 05:42:36.235636 sshd[8155]: Invalid user ftpmedia from 176.10.207.140 port 44804 Jan 30 05:42:36.264883 sshd[8155]: Received disconnect from 176.10.207.140 port 44804:11: Bye Bye [preauth] Jan 30 05:42:36.264883 sshd[8155]: Disconnected from invalid user ftpmedia 176.10.207.140 port 44804 [preauth] Jan 30 05:42:36.270293 systemd[1]: sshd@110-128.140.113.241:22-176.10.207.140:44804.service: Deactivated successfully. Jan 30 05:42:36.322307 sshd[8152]: Accepted publickey for core from 139.178.89.65 port 49464 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:36.327712 sshd[8152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:36.339695 systemd-logind[1473]: New session 92 of user core. Jan 30 05:42:36.345261 systemd[1]: Started session-92.scope - Session 92 of User core. Jan 30 05:42:37.556396 sshd[8152]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:37.564539 systemd[1]: sshd@109-128.140.113.241:22-139.178.89.65:49464.service: Deactivated successfully. Jan 30 05:42:37.570170 systemd[1]: session-92.scope: Deactivated successfully. Jan 30 05:42:37.572049 systemd-logind[1473]: Session 92 logged out. Waiting for processes to exit. Jan 30 05:42:37.574071 systemd-logind[1473]: Removed session 92. Jan 30 05:42:42.732503 systemd[1]: Started sshd@111-128.140.113.241:22-139.178.89.65:56742.service - OpenSSH per-connection server daemon (139.178.89.65:56742). Jan 30 05:42:43.763155 sshd[8170]: Accepted publickey for core from 139.178.89.65 port 56742 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:43.766544 sshd[8170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:43.778010 systemd-logind[1473]: New session 93 of user core. Jan 30 05:42:43.782573 systemd[1]: Started session-93.scope - Session 93 of User core. Jan 30 05:42:44.660962 sshd[8170]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:44.670937 systemd[1]: sshd@111-128.140.113.241:22-139.178.89.65:56742.service: Deactivated successfully. Jan 30 05:42:44.678841 systemd[1]: session-93.scope: Deactivated successfully. Jan 30 05:42:44.680835 systemd-logind[1473]: Session 93 logged out. Waiting for processes to exit. Jan 30 05:42:44.682951 systemd-logind[1473]: Removed session 93. 
Jan 30 05:42:45.512735 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.qTHYdj.mount: Deactivated successfully. Jan 30 05:42:49.841485 systemd[1]: Started sshd@112-128.140.113.241:22-139.178.89.65:56750.service - OpenSSH per-connection server daemon (139.178.89.65:56750). Jan 30 05:42:50.851995 sshd[8211]: Accepted publickey for core from 139.178.89.65 port 56750 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:50.857007 sshd[8211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:50.869735 systemd-logind[1473]: New session 94 of user core. Jan 30 05:42:50.874211 systemd[1]: Started session-94.scope - Session 94 of User core. Jan 30 05:42:51.683455 sshd[8211]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:51.692354 systemd[1]: sshd@112-128.140.113.241:22-139.178.89.65:56750.service: Deactivated successfully. Jan 30 05:42:51.699083 systemd[1]: session-94.scope: Deactivated successfully. Jan 30 05:42:51.701601 systemd-logind[1473]: Session 94 logged out. Waiting for processes to exit. Jan 30 05:42:51.704520 systemd-logind[1473]: Removed session 94. Jan 30 05:42:56.868808 systemd[1]: Started sshd@113-128.140.113.241:22-139.178.89.65:44716.service - OpenSSH per-connection server daemon (139.178.89.65:44716). Jan 30 05:42:57.871292 sshd[8227]: Accepted publickey for core from 139.178.89.65 port 44716 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:42:57.875588 sshd[8227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:42:57.886363 systemd-logind[1473]: New session 95 of user core. Jan 30 05:42:57.891158 systemd[1]: Started session-95.scope - Session 95 of User core. Jan 30 05:42:58.697542 sshd[8227]: pam_unix(sshd:session): session closed for user core Jan 30 05:42:58.704054 systemd[1]: sshd@113-128.140.113.241:22-139.178.89.65:44716.service: Deactivated successfully. Jan 30 05:42:58.711404 systemd[1]: session-95.scope: Deactivated successfully. Jan 30 05:42:58.720708 systemd-logind[1473]: Session 95 logged out. Waiting for processes to exit. Jan 30 05:42:58.723147 systemd-logind[1473]: Removed session 95. Jan 30 05:43:01.073550 systemd[1]: Started sshd@114-128.140.113.241:22-103.146.159.74:51542.service - OpenSSH per-connection server daemon (103.146.159.74:51542). Jan 30 05:43:02.368208 sshd[8242]: Invalid user xyh from 103.146.159.74 port 51542 Jan 30 05:43:02.610664 sshd[8242]: Received disconnect from 103.146.159.74 port 51542:11: Bye Bye [preauth] Jan 30 05:43:02.610664 sshd[8242]: Disconnected from invalid user xyh 103.146.159.74 port 51542 [preauth] Jan 30 05:43:02.614442 systemd[1]: sshd@114-128.140.113.241:22-103.146.159.74:51542.service: Deactivated successfully. Jan 30 05:43:03.881639 systemd[1]: Started sshd@115-128.140.113.241:22-139.178.89.65:51518.service - OpenSSH per-connection server daemon (139.178.89.65:51518). Jan 30 05:43:04.036460 systemd[1]: Started sshd@116-128.140.113.241:22-178.128.149.80:37218.service - OpenSSH per-connection server daemon (178.128.149.80:37218). 
Jan 30 05:43:04.591724 sshd[8271]: Invalid user ftpuser from 178.128.149.80 port 37218 Jan 30 05:43:04.686500 sshd[8271]: Received disconnect from 178.128.149.80 port 37218:11: Bye Bye [preauth] Jan 30 05:43:04.686500 sshd[8271]: Disconnected from invalid user ftpuser 178.128.149.80 port 37218 [preauth] Jan 30 05:43:04.691935 systemd[1]: sshd@116-128.140.113.241:22-178.128.149.80:37218.service: Deactivated successfully. Jan 30 05:43:04.901120 sshd[8268]: Accepted publickey for core from 139.178.89.65 port 51518 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:04.904005 sshd[8268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:04.914170 systemd-logind[1473]: New session 96 of user core. Jan 30 05:43:04.920139 systemd[1]: Started session-96.scope - Session 96 of User core. Jan 30 05:43:05.715238 sshd[8268]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:05.722061 systemd[1]: sshd@115-128.140.113.241:22-139.178.89.65:51518.service: Deactivated successfully. Jan 30 05:43:05.726824 systemd[1]: session-96.scope: Deactivated successfully. Jan 30 05:43:05.729606 systemd-logind[1473]: Session 96 logged out. Waiting for processes to exit. Jan 30 05:43:05.732936 systemd-logind[1473]: Removed session 96. Jan 30 05:43:07.469734 systemd[1]: sshd@89-128.140.113.241:22-103.146.159.74:52946.service: Deactivated successfully. Jan 30 05:43:10.893622 systemd[1]: Started sshd@117-128.140.113.241:22-139.178.89.65:51532.service - OpenSSH per-connection server daemon (139.178.89.65:51532). Jan 30 05:43:11.914921 sshd[8288]: Accepted publickey for core from 139.178.89.65 port 51532 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:11.919052 sshd[8288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:11.929281 systemd-logind[1473]: New session 97 of user core. Jan 30 05:43:11.937210 systemd[1]: Started session-97.scope - Session 97 of User core. Jan 30 05:43:12.734688 sshd[8288]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:12.743529 systemd[1]: sshd@117-128.140.113.241:22-139.178.89.65:51532.service: Deactivated successfully. Jan 30 05:43:12.749680 systemd[1]: session-97.scope: Deactivated successfully. Jan 30 05:43:12.751789 systemd-logind[1473]: Session 97 logged out. Waiting for processes to exit. Jan 30 05:43:12.754601 systemd-logind[1473]: Removed session 97. Jan 30 05:43:17.474927 systemd[1]: run-containerd-runc-k8s.io-d1b8a56aa1426f80e4f2b3d18ecd7ff233c350ab7fe99544a4f260cc0e169262-runc.EF3h6K.mount: Deactivated successfully. Jan 30 05:43:17.913588 systemd[1]: Started sshd@118-128.140.113.241:22-139.178.89.65:34856.service - OpenSSH per-connection server daemon (139.178.89.65:34856). Jan 30 05:43:18.955718 sshd[8345]: Accepted publickey for core from 139.178.89.65 port 34856 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:18.961163 sshd[8345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:18.970806 systemd-logind[1473]: New session 98 of user core. Jan 30 05:43:18.977160 systemd[1]: Started session-98.scope - Session 98 of User core. Jan 30 05:43:20.137816 sshd[8345]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:20.145770 systemd[1]: sshd@118-128.140.113.241:22-139.178.89.65:34856.service: Deactivated successfully. Jan 30 05:43:20.150983 systemd[1]: session-98.scope: Deactivated successfully. 
Jan 30 05:43:20.154612 systemd-logind[1473]: Session 98 logged out. Waiting for processes to exit. Jan 30 05:43:20.157711 systemd-logind[1473]: Removed session 98. Jan 30 05:43:25.318709 systemd[1]: Started sshd@119-128.140.113.241:22-139.178.89.65:46168.service - OpenSSH per-connection server daemon (139.178.89.65:46168). Jan 30 05:43:26.316448 sshd[8360]: Accepted publickey for core from 139.178.89.65 port 46168 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:26.319883 sshd[8360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:26.329068 systemd-logind[1473]: New session 99 of user core. Jan 30 05:43:26.339156 systemd[1]: Started session-99.scope - Session 99 of User core. Jan 30 05:43:27.101808 sshd[8360]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:27.112484 systemd[1]: sshd@119-128.140.113.241:22-139.178.89.65:46168.service: Deactivated successfully. Jan 30 05:43:27.117724 systemd[1]: session-99.scope: Deactivated successfully. Jan 30 05:43:27.119530 systemd-logind[1473]: Session 99 logged out. Waiting for processes to exit. Jan 30 05:43:27.121678 systemd-logind[1473]: Removed session 99. Jan 30 05:43:32.279752 systemd[1]: Started sshd@120-128.140.113.241:22-139.178.89.65:35936.service - OpenSSH per-connection server daemon (139.178.89.65:35936). Jan 30 05:43:33.279635 sshd[8387]: Accepted publickey for core from 139.178.89.65 port 35936 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:33.282274 sshd[8387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:33.288587 systemd-logind[1473]: New session 100 of user core. Jan 30 05:43:33.302228 systemd[1]: Started session-100.scope - Session 100 of User core. Jan 30 05:43:34.147428 sshd[8387]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:34.156031 systemd[1]: sshd@120-128.140.113.241:22-139.178.89.65:35936.service: Deactivated successfully. Jan 30 05:43:34.160419 systemd[1]: session-100.scope: Deactivated successfully. Jan 30 05:43:34.163381 systemd-logind[1473]: Session 100 logged out. Waiting for processes to exit. Jan 30 05:43:34.165551 systemd-logind[1473]: Removed session 100. Jan 30 05:43:39.330404 systemd[1]: Started sshd@121-128.140.113.241:22-139.178.89.65:35940.service - OpenSSH per-connection server daemon (139.178.89.65:35940). Jan 30 05:43:40.357280 sshd[8426]: Accepted publickey for core from 139.178.89.65 port 35940 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:40.360877 sshd[8426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:40.370347 systemd-logind[1473]: New session 101 of user core. Jan 30 05:43:40.376232 systemd[1]: Started session-101.scope - Session 101 of User core. Jan 30 05:43:41.202757 sshd[8426]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:41.211019 systemd[1]: sshd@121-128.140.113.241:22-139.178.89.65:35940.service: Deactivated successfully. Jan 30 05:43:41.215749 systemd[1]: session-101.scope: Deactivated successfully. Jan 30 05:43:41.219089 systemd-logind[1473]: Session 101 logged out. Waiting for processes to exit. Jan 30 05:43:41.221073 systemd-logind[1473]: Removed session 101. Jan 30 05:43:45.380775 systemd[1]: Started sshd@122-128.140.113.241:22-176.10.207.140:39620.service - OpenSSH per-connection server daemon (176.10.207.140:39620). 
Jan 30 05:43:45.601466 sshd[8439]: Invalid user remoto from 176.10.207.140 port 39620 Jan 30 05:43:45.631312 sshd[8439]: Received disconnect from 176.10.207.140 port 39620:11: Bye Bye [preauth] Jan 30 05:43:45.631312 sshd[8439]: Disconnected from invalid user remoto 176.10.207.140 port 39620 [preauth] Jan 30 05:43:45.637696 systemd[1]: sshd@122-128.140.113.241:22-176.10.207.140:39620.service: Deactivated successfully. Jan 30 05:43:46.386431 systemd[1]: Started sshd@123-128.140.113.241:22-139.178.89.65:51252.service - OpenSSH per-connection server daemon (139.178.89.65:51252). Jan 30 05:43:47.390181 sshd[8464]: Accepted publickey for core from 139.178.89.65 port 51252 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:47.393595 sshd[8464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:47.404604 systemd-logind[1473]: New session 102 of user core. Jan 30 05:43:47.412222 systemd[1]: Started session-102.scope - Session 102 of User core. Jan 30 05:43:48.213559 sshd[8464]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:48.223165 systemd[1]: sshd@123-128.140.113.241:22-139.178.89.65:51252.service: Deactivated successfully. Jan 30 05:43:48.230248 systemd[1]: session-102.scope: Deactivated successfully. Jan 30 05:43:48.232591 systemd-logind[1473]: Session 102 logged out. Waiting for processes to exit. Jan 30 05:43:48.234654 systemd-logind[1473]: Removed session 102. Jan 30 05:43:53.382926 systemd[1]: Started sshd@124-128.140.113.241:22-139.178.89.65:50396.service - OpenSSH per-connection server daemon (139.178.89.65:50396). Jan 30 05:43:54.380805 sshd[8477]: Accepted publickey for core from 139.178.89.65 port 50396 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:43:54.384141 sshd[8477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:43:54.393013 systemd-logind[1473]: New session 103 of user core. Jan 30 05:43:54.399182 systemd[1]: Started session-103.scope - Session 103 of User core. Jan 30 05:43:55.219010 sshd[8477]: pam_unix(sshd:session): session closed for user core Jan 30 05:43:55.227027 systemd[1]: sshd@124-128.140.113.241:22-139.178.89.65:50396.service: Deactivated successfully. Jan 30 05:43:55.232753 systemd[1]: session-103.scope: Deactivated successfully. Jan 30 05:43:55.236218 systemd-logind[1473]: Session 103 logged out. Waiting for processes to exit. Jan 30 05:43:55.239325 systemd-logind[1473]: Removed session 103. Jan 30 05:43:55.413347 systemd[1]: Started sshd@125-128.140.113.241:22-186.10.125.209:18145.service - OpenSSH per-connection server daemon (186.10.125.209:18145). Jan 30 05:43:56.700808 sshd[8490]: Invalid user sammy from 186.10.125.209 port 18145 Jan 30 05:43:56.943952 sshd[8490]: Received disconnect from 186.10.125.209 port 18145:11: Bye Bye [preauth] Jan 30 05:43:56.943952 sshd[8490]: Disconnected from invalid user sammy 186.10.125.209 port 18145 [preauth] Jan 30 05:43:56.947326 systemd[1]: sshd@125-128.140.113.241:22-186.10.125.209:18145.service: Deactivated successfully. Jan 30 05:43:58.077246 systemd[1]: Started sshd@126-128.140.113.241:22-201.15.45.18:39388.service - OpenSSH per-connection server daemon (201.15.45.18:39388). 
Jan 30 05:43:59.273696 sshd[8495]: Invalid user guest from 201.15.45.18 port 39388 Jan 30 05:43:59.490412 sshd[8495]: Received disconnect from 201.15.45.18 port 39388:11: Bye Bye [preauth] Jan 30 05:43:59.490412 sshd[8495]: Disconnected from invalid user guest 201.15.45.18 port 39388 [preauth] Jan 30 05:43:59.496322 systemd[1]: sshd@126-128.140.113.241:22-201.15.45.18:39388.service: Deactivated successfully. Jan 30 05:44:00.404463 systemd[1]: Started sshd@127-128.140.113.241:22-139.178.89.65:50404.service - OpenSSH per-connection server daemon (139.178.89.65:50404). Jan 30 05:44:01.409949 sshd[8502]: Accepted publickey for core from 139.178.89.65 port 50404 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:01.412462 sshd[8502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:01.421386 systemd-logind[1473]: New session 104 of user core. Jan 30 05:44:01.427178 systemd[1]: Started session-104.scope - Session 104 of User core. Jan 30 05:44:02.218279 sshd[8502]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:02.225653 systemd[1]: sshd@127-128.140.113.241:22-139.178.89.65:50404.service: Deactivated successfully. Jan 30 05:44:02.231690 systemd[1]: session-104.scope: Deactivated successfully. Jan 30 05:44:02.235659 systemd-logind[1473]: Session 104 logged out. Waiting for processes to exit. Jan 30 05:44:02.238503 systemd-logind[1473]: Removed session 104. Jan 30 05:44:07.388675 systemd[1]: Started sshd@128-128.140.113.241:22-139.178.89.65:40678.service - OpenSSH per-connection server daemon (139.178.89.65:40678). Jan 30 05:44:08.386586 sshd[8536]: Accepted publickey for core from 139.178.89.65 port 40678 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:08.390787 sshd[8536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:08.403403 systemd-logind[1473]: New session 105 of user core. Jan 30 05:44:08.409306 systemd[1]: Started session-105.scope - Session 105 of User core. Jan 30 05:44:09.201387 sshd[8536]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:09.208077 systemd[1]: sshd@128-128.140.113.241:22-139.178.89.65:40678.service: Deactivated successfully. Jan 30 05:44:09.212649 systemd[1]: session-105.scope: Deactivated successfully. Jan 30 05:44:09.218200 systemd-logind[1473]: Session 105 logged out. Waiting for processes to exit. Jan 30 05:44:09.223009 systemd-logind[1473]: Removed session 105. Jan 30 05:44:11.600314 systemd[1]: Started sshd@129-128.140.113.241:22-178.128.149.80:35264.service - OpenSSH per-connection server daemon (178.128.149.80:35264). Jan 30 05:44:12.139094 sshd[8549]: Invalid user es from 178.128.149.80 port 35264 Jan 30 05:44:12.235047 sshd[8549]: Received disconnect from 178.128.149.80 port 35264:11: Bye Bye [preauth] Jan 30 05:44:12.235047 sshd[8549]: Disconnected from invalid user es 178.128.149.80 port 35264 [preauth] Jan 30 05:44:12.240632 systemd[1]: sshd@129-128.140.113.241:22-178.128.149.80:35264.service: Deactivated successfully. Jan 30 05:44:14.384615 systemd[1]: Started sshd@130-128.140.113.241:22-139.178.89.65:32818.service - OpenSSH per-connection server daemon (139.178.89.65:32818). 
Jan 30 05:44:15.389201 sshd[8554]: Accepted publickey for core from 139.178.89.65 port 32818 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:15.392696 sshd[8554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:15.402658 systemd-logind[1473]: New session 106 of user core. Jan 30 05:44:15.409235 systemd[1]: Started session-106.scope - Session 106 of User core. Jan 30 05:44:16.191693 sshd[8554]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:16.198626 systemd[1]: sshd@130-128.140.113.241:22-139.178.89.65:32818.service: Deactivated successfully. Jan 30 05:44:16.204548 systemd[1]: session-106.scope: Deactivated successfully. Jan 30 05:44:16.207712 systemd-logind[1473]: Session 106 logged out. Waiting for processes to exit. Jan 30 05:44:16.211001 systemd-logind[1473]: Removed session 106. Jan 30 05:44:21.376441 systemd[1]: Started sshd@131-128.140.113.241:22-139.178.89.65:57170.service - OpenSSH per-connection server daemon (139.178.89.65:57170). Jan 30 05:44:22.395092 sshd[8605]: Accepted publickey for core from 139.178.89.65 port 57170 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:22.399852 sshd[8605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:22.409243 systemd-logind[1473]: New session 107 of user core. Jan 30 05:44:22.417145 systemd[1]: Started session-107.scope - Session 107 of User core. Jan 30 05:44:23.199415 sshd[8605]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:23.209815 systemd[1]: sshd@131-128.140.113.241:22-139.178.89.65:57170.service: Deactivated successfully. Jan 30 05:44:23.216870 systemd[1]: session-107.scope: Deactivated successfully. Jan 30 05:44:23.222776 systemd-logind[1473]: Session 107 logged out. Waiting for processes to exit. Jan 30 05:44:23.225359 systemd-logind[1473]: Removed session 107. Jan 30 05:44:28.378763 systemd[1]: Started sshd@132-128.140.113.241:22-139.178.89.65:57180.service - OpenSSH per-connection server daemon (139.178.89.65:57180). Jan 30 05:44:29.386592 sshd[8620]: Accepted publickey for core from 139.178.89.65 port 57180 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:29.390775 sshd[8620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:29.402032 systemd-logind[1473]: New session 108 of user core. Jan 30 05:44:29.407208 systemd[1]: Started session-108.scope - Session 108 of User core. Jan 30 05:44:30.172185 sshd[8620]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:30.178618 systemd[1]: sshd@132-128.140.113.241:22-139.178.89.65:57180.service: Deactivated successfully. Jan 30 05:44:30.184019 systemd[1]: session-108.scope: Deactivated successfully. Jan 30 05:44:30.187401 systemd-logind[1473]: Session 108 logged out. Waiting for processes to exit. Jan 30 05:44:30.189937 systemd-logind[1473]: Removed session 108. Jan 30 05:44:35.352508 systemd[1]: Started sshd@133-128.140.113.241:22-139.178.89.65:44584.service - OpenSSH per-connection server daemon (139.178.89.65:44584). Jan 30 05:44:36.368102 sshd[8658]: Accepted publickey for core from 139.178.89.65 port 44584 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:36.370109 sshd[8658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:36.377016 systemd-logind[1473]: New session 109 of user core. 
Jan 30 05:44:36.384751 systemd[1]: Started session-109.scope - Session 109 of User core. Jan 30 05:44:37.172238 sshd[8658]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:37.178562 systemd[1]: sshd@133-128.140.113.241:22-139.178.89.65:44584.service: Deactivated successfully. Jan 30 05:44:37.183759 systemd[1]: session-109.scope: Deactivated successfully. Jan 30 05:44:37.187642 systemd-logind[1473]: Session 109 logged out. Waiting for processes to exit. Jan 30 05:44:37.190261 systemd-logind[1473]: Removed session 109. Jan 30 05:44:42.347353 systemd[1]: Started sshd@134-128.140.113.241:22-139.178.89.65:60558.service - OpenSSH per-connection server daemon (139.178.89.65:60558). Jan 30 05:44:43.351084 sshd[8671]: Accepted publickey for core from 139.178.89.65 port 60558 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:43.355614 sshd[8671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:43.369415 systemd-logind[1473]: New session 110 of user core. Jan 30 05:44:43.377252 systemd[1]: Started session-110.scope - Session 110 of User core. Jan 30 05:44:44.164358 sshd[8671]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:44.170525 systemd[1]: sshd@134-128.140.113.241:22-139.178.89.65:60558.service: Deactivated successfully. Jan 30 05:44:44.176627 systemd[1]: session-110.scope: Deactivated successfully. Jan 30 05:44:44.180971 systemd-logind[1473]: Session 110 logged out. Waiting for processes to exit. Jan 30 05:44:44.184170 systemd-logind[1473]: Removed session 110. Jan 30 05:44:49.347480 systemd[1]: Started sshd@135-128.140.113.241:22-139.178.89.65:60574.service - OpenSSH per-connection server daemon (139.178.89.65:60574). Jan 30 05:44:50.361766 sshd[8708]: Accepted publickey for core from 139.178.89.65 port 60574 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:50.365473 sshd[8708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:50.376198 systemd-logind[1473]: New session 111 of user core. Jan 30 05:44:50.382207 systemd[1]: Started session-111.scope - Session 111 of User core. Jan 30 05:44:51.204071 sshd[8708]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:51.212538 systemd[1]: sshd@135-128.140.113.241:22-139.178.89.65:60574.service: Deactivated successfully. Jan 30 05:44:51.218215 systemd[1]: session-111.scope: Deactivated successfully. Jan 30 05:44:51.220219 systemd-logind[1473]: Session 111 logged out. Waiting for processes to exit. Jan 30 05:44:51.222666 systemd-logind[1473]: Removed session 111. Jan 30 05:44:51.671565 systemd[1]: Started sshd@136-128.140.113.241:22-103.146.159.74:49282.service - OpenSSH per-connection server daemon (103.146.159.74:49282). Jan 30 05:44:53.590296 systemd[1]: Started sshd@137-128.140.113.241:22-176.10.207.140:60960.service - OpenSSH per-connection server daemon (176.10.207.140:60960). Jan 30 05:44:53.635746 sshd[8721]: Invalid user zhangxc from 103.146.159.74 port 49282 Jan 30 05:44:53.797027 sshd[8724]: Invalid user nmr from 176.10.207.140 port 60960 Jan 30 05:44:53.827760 sshd[8724]: Received disconnect from 176.10.207.140 port 60960:11: Bye Bye [preauth] Jan 30 05:44:53.827760 sshd[8724]: Disconnected from invalid user nmr 176.10.207.140 port 60960 [preauth] Jan 30 05:44:53.831709 systemd[1]: sshd@137-128.140.113.241:22-176.10.207.140:60960.service: Deactivated successfully. 
Jan 30 05:44:53.915431 sshd[8721]: Received disconnect from 103.146.159.74 port 49282:11: Bye Bye [preauth] Jan 30 05:44:53.915431 sshd[8721]: Disconnected from invalid user zhangxc 103.146.159.74 port 49282 [preauth] Jan 30 05:44:53.919459 systemd[1]: sshd@136-128.140.113.241:22-103.146.159.74:49282.service: Deactivated successfully. Jan 30 05:44:56.382405 systemd[1]: Started sshd@138-128.140.113.241:22-139.178.89.65:38858.service - OpenSSH per-connection server daemon (139.178.89.65:38858). Jan 30 05:44:57.352933 sshd[8732]: Accepted publickey for core from 139.178.89.65 port 38858 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:44:57.355872 sshd[8732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:44:57.365009 systemd-logind[1473]: New session 112 of user core. Jan 30 05:44:57.372090 systemd[1]: Started session-112.scope - Session 112 of User core. Jan 30 05:44:58.149984 sshd[8732]: pam_unix(sshd:session): session closed for user core Jan 30 05:44:58.158283 systemd[1]: sshd@138-128.140.113.241:22-139.178.89.65:38858.service: Deactivated successfully. Jan 30 05:44:58.163404 systemd[1]: session-112.scope: Deactivated successfully. Jan 30 05:44:58.164997 systemd-logind[1473]: Session 112 logged out. Waiting for processes to exit. Jan 30 05:44:58.167137 systemd-logind[1473]: Removed session 112. Jan 30 05:45:03.335349 systemd[1]: Started sshd@139-128.140.113.241:22-139.178.89.65:39112.service - OpenSSH per-connection server daemon (139.178.89.65:39112). Jan 30 05:45:04.346766 sshd[8759]: Accepted publickey for core from 139.178.89.65 port 39112 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:04.350541 sshd[8759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:04.360351 systemd-logind[1473]: New session 113 of user core. Jan 30 05:45:04.367179 systemd[1]: Started session-113.scope - Session 113 of User core. Jan 30 05:45:05.165044 sshd[8759]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:05.171730 systemd[1]: sshd@139-128.140.113.241:22-139.178.89.65:39112.service: Deactivated successfully. Jan 30 05:45:05.178353 systemd[1]: session-113.scope: Deactivated successfully. Jan 30 05:45:05.182685 systemd-logind[1473]: Session 113 logged out. Waiting for processes to exit. Jan 30 05:45:05.185261 systemd-logind[1473]: Removed session 113. Jan 30 05:45:10.342395 systemd[1]: Started sshd@140-128.140.113.241:22-139.178.89.65:39114.service - OpenSSH per-connection server daemon (139.178.89.65:39114). Jan 30 05:45:11.344937 sshd[8794]: Accepted publickey for core from 139.178.89.65 port 39114 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:11.347350 sshd[8794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:11.357155 systemd-logind[1473]: New session 114 of user core. Jan 30 05:45:11.362333 systemd[1]: Started session-114.scope - Session 114 of User core. Jan 30 05:45:12.145700 sshd[8794]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:12.152503 systemd[1]: sshd@140-128.140.113.241:22-139.178.89.65:39114.service: Deactivated successfully. Jan 30 05:45:12.157479 systemd[1]: session-114.scope: Deactivated successfully. Jan 30 05:45:12.158858 systemd-logind[1473]: Session 114 logged out. Waiting for processes to exit. Jan 30 05:45:12.161456 systemd-logind[1473]: Removed session 114. 
Jan 30 05:45:17.152607 systemd[1]: Started sshd@141-128.140.113.241:22-178.128.149.80:33310.service - OpenSSH per-connection server daemon (178.128.149.80:33310). Jan 30 05:45:17.328697 systemd[1]: Started sshd@142-128.140.113.241:22-139.178.89.65:44574.service - OpenSSH per-connection server daemon (139.178.89.65:44574). Jan 30 05:45:17.686284 sshd[8826]: Invalid user ftpuser from 178.128.149.80 port 33310 Jan 30 05:45:17.778883 sshd[8826]: Received disconnect from 178.128.149.80 port 33310:11: Bye Bye [preauth] Jan 30 05:45:17.778883 sshd[8826]: Disconnected from invalid user ftpuser 178.128.149.80 port 33310 [preauth] Jan 30 05:45:17.784757 systemd[1]: sshd@141-128.140.113.241:22-178.128.149.80:33310.service: Deactivated successfully. Jan 30 05:45:18.311874 sshd[8829]: Accepted publickey for core from 139.178.89.65 port 44574 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:18.317218 sshd[8829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:18.328110 systemd-logind[1473]: New session 115 of user core. Jan 30 05:45:18.333248 systemd[1]: Started session-115.scope - Session 115 of User core. Jan 30 05:45:19.149330 sshd[8829]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:19.157088 systemd[1]: sshd@142-128.140.113.241:22-139.178.89.65:44574.service: Deactivated successfully. Jan 30 05:45:19.162818 systemd[1]: session-115.scope: Deactivated successfully. Jan 30 05:45:19.168557 systemd-logind[1473]: Session 115 logged out. Waiting for processes to exit. Jan 30 05:45:19.172209 systemd-logind[1473]: Removed session 115. Jan 30 05:45:22.462389 systemd[1]: Started sshd@143-128.140.113.241:22-186.10.125.209:16500.service - OpenSSH per-connection server daemon (186.10.125.209:16500). Jan 30 05:45:23.753802 sshd[8864]: Invalid user git from 186.10.125.209 port 16500 Jan 30 05:45:23.998790 sshd[8864]: Received disconnect from 186.10.125.209 port 16500:11: Bye Bye [preauth] Jan 30 05:45:23.998790 sshd[8864]: Disconnected from invalid user git 186.10.125.209 port 16500 [preauth] Jan 30 05:45:24.002433 systemd[1]: sshd@143-128.140.113.241:22-186.10.125.209:16500.service: Deactivated successfully. Jan 30 05:45:24.327461 systemd[1]: Started sshd@144-128.140.113.241:22-139.178.89.65:58944.service - OpenSSH per-connection server daemon (139.178.89.65:58944). Jan 30 05:45:25.318637 sshd[8871]: Accepted publickey for core from 139.178.89.65 port 58944 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:25.322118 sshd[8871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:25.333048 systemd-logind[1473]: New session 116 of user core. Jan 30 05:45:25.338158 systemd[1]: Started session-116.scope - Session 116 of User core. Jan 30 05:45:26.115214 sshd[8871]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:26.124390 systemd[1]: sshd@144-128.140.113.241:22-139.178.89.65:58944.service: Deactivated successfully. Jan 30 05:45:26.130669 systemd[1]: session-116.scope: Deactivated successfully. Jan 30 05:45:26.132771 systemd-logind[1473]: Session 116 logged out. Waiting for processes to exit. Jan 30 05:45:26.135201 systemd-logind[1473]: Removed session 116. Jan 30 05:45:31.301708 systemd[1]: Started sshd@145-128.140.113.241:22-139.178.89.65:47616.service - OpenSSH per-connection server daemon (139.178.89.65:47616). 
Jan 30 05:45:32.307954 sshd[8886]: Accepted publickey for core from 139.178.89.65 port 47616 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:32.310874 sshd[8886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:32.320165 systemd-logind[1473]: New session 117 of user core. Jan 30 05:45:32.330193 systemd[1]: Started session-117.scope - Session 117 of User core. Jan 30 05:45:33.120519 sshd[8886]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:33.127127 systemd[1]: sshd@145-128.140.113.241:22-139.178.89.65:47616.service: Deactivated successfully. Jan 30 05:45:33.132633 systemd[1]: session-117.scope: Deactivated successfully. Jan 30 05:45:33.137108 systemd-logind[1473]: Session 117 logged out. Waiting for processes to exit. Jan 30 05:45:33.139242 systemd-logind[1473]: Removed session 117. Jan 30 05:45:38.305390 systemd[1]: Started sshd@146-128.140.113.241:22-139.178.89.65:47624.service - OpenSSH per-connection server daemon (139.178.89.65:47624). Jan 30 05:45:39.349928 sshd[8921]: Accepted publickey for core from 139.178.89.65 port 47624 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:39.353447 sshd[8921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:39.359862 systemd-logind[1473]: New session 118 of user core. Jan 30 05:45:39.368436 systemd[1]: Started session-118.scope - Session 118 of User core. Jan 30 05:45:40.449679 sshd[8921]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:40.460789 systemd[1]: sshd@146-128.140.113.241:22-139.178.89.65:47624.service: Deactivated successfully. Jan 30 05:45:40.467706 systemd[1]: session-118.scope: Deactivated successfully. Jan 30 05:45:40.473059 systemd-logind[1473]: Session 118 logged out. Waiting for processes to exit. Jan 30 05:45:40.475593 systemd-logind[1473]: Removed session 118. Jan 30 05:45:45.623410 systemd[1]: Started sshd@147-128.140.113.241:22-139.178.89.65:46156.service - OpenSSH per-connection server daemon (139.178.89.65:46156). Jan 30 05:45:46.637503 sshd[8959]: Accepted publickey for core from 139.178.89.65 port 46156 ssh2: RSA SHA256:DVeG4daC7PfEmReeDrGYtwvIXvDLlPsKV7VrsCfZ+AA Jan 30 05:45:46.641649 sshd[8959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:45:46.653657 systemd-logind[1473]: New session 119 of user core. Jan 30 05:45:46.663252 systemd[1]: Started session-119.scope - Session 119 of User core. Jan 30 05:45:47.460556 sshd[8959]: pam_unix(sshd:session): session closed for user core Jan 30 05:45:47.466934 systemd[1]: sshd@147-128.140.113.241:22-139.178.89.65:46156.service: Deactivated successfully. Jan 30 05:45:47.472423 systemd[1]: session-119.scope: Deactivated successfully. Jan 30 05:45:47.478967 systemd-logind[1473]: Session 119 logged out. Waiting for processes to exit. Jan 30 05:45:47.481343 systemd-logind[1473]: Removed session 119. Jan 30 05:46:00.396708 systemd[1]: Started sshd@148-128.140.113.241:22-176.10.207.140:34296.service - OpenSSH per-connection server daemon (176.10.207.140:34296). Jan 30 05:46:00.611473 sshd[8976]: Invalid user cone from 176.10.207.140 port 34296 Jan 30 05:46:00.641859 sshd[8976]: Received disconnect from 176.10.207.140 port 34296:11: Bye Bye [preauth] Jan 30 05:46:00.641859 sshd[8976]: Disconnected from invalid user cone 176.10.207.140 port 34296 [preauth] Jan 30 05:46:00.647435 systemd[1]: sshd@148-128.140.113.241:22-176.10.207.140:34296.service: Deactivated successfully. 
Jan 30 05:46:20.303470 systemd[1]: cri-containerd-84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f.scope: Deactivated successfully. Jan 30 05:46:20.305373 systemd[1]: cri-containerd-84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f.scope: Consumed 13.357s CPU time. Jan 30 05:46:20.458598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f-rootfs.mount: Deactivated successfully. Jan 30 05:46:20.482444 containerd[1502]: time="2025-01-30T05:46:20.452639103Z" level=info msg="shim disconnected" id=84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f namespace=k8s.io Jan 30 05:46:20.496994 containerd[1502]: time="2025-01-30T05:46:20.496848133Z" level=warning msg="cleaning up after shim disconnected" id=84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f namespace=k8s.io Jan 30 05:46:20.496994 containerd[1502]: time="2025-01-30T05:46:20.496980170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:46:20.581549 kubelet[2720]: E0130 05:46:20.581085 2720 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45144->10.0.0.2:2379: read: connection timed out" Jan 30 05:46:20.823525 kubelet[2720]: I0130 05:46:20.823445 2720 scope.go:117] "RemoveContainer" containerID="84caf3220d01ba7f1444bb015ed085a05d5c4fb733cf8715905fc1211213b80f" Jan 30 05:46:20.968659 containerd[1502]: time="2025-01-30T05:46:20.968169715Z" level=info msg="CreateContainer within sandbox \"d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 30 05:46:21.098512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341676767.mount: Deactivated successfully. Jan 30 05:46:21.133034 containerd[1502]: time="2025-01-30T05:46:21.132841400Z" level=info msg="CreateContainer within sandbox \"d671161a80c7cd14e58ee77cf3621405bcf291d142ac1b1c0ab91d7a028ad457\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"10608d3a731b8204ac48b8dacecf7c413d8b9540ad753558d56460a835ca3275\"" Jan 30 05:46:21.140697 containerd[1502]: time="2025-01-30T05:46:21.140522356Z" level=info msg="StartContainer for \"10608d3a731b8204ac48b8dacecf7c413d8b9540ad753558d56460a835ca3275\"" Jan 30 05:46:21.206105 systemd[1]: Started cri-containerd-10608d3a731b8204ac48b8dacecf7c413d8b9540ad753558d56460a835ca3275.scope - libcontainer container 10608d3a731b8204ac48b8dacecf7c413d8b9540ad753558d56460a835ca3275. Jan 30 05:46:21.273577 containerd[1502]: time="2025-01-30T05:46:21.272552634Z" level=info msg="StartContainer for \"10608d3a731b8204ac48b8dacecf7c413d8b9540ad753558d56460a835ca3275\" returns successfully" Jan 30 05:46:21.603478 systemd[1]: cri-containerd-f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c.scope: Deactivated successfully. Jan 30 05:46:21.604706 systemd[1]: cri-containerd-f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c.scope: Consumed 21.744s CPU time, 19.0M memory peak, 0B memory swap peak. 
Jan 30 05:46:21.641639 containerd[1502]: time="2025-01-30T05:46:21.641360182Z" level=info msg="shim disconnected" id=f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c namespace=k8s.io Jan 30 05:46:21.641639 containerd[1502]: time="2025-01-30T05:46:21.641422879Z" level=warning msg="cleaning up after shim disconnected" id=f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c namespace=k8s.io Jan 30 05:46:21.641639 containerd[1502]: time="2025-01-30T05:46:21.641435673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:46:21.649500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c-rootfs.mount: Deactivated successfully. Jan 30 05:46:21.798920 kubelet[2720]: I0130 05:46:21.798806 2720 scope.go:117] "RemoveContainer" containerID="f98d07d69040fc54212b628e00f99b3491407b8c72fb8b5587c19711dd61750c" Jan 30 05:46:21.804409 containerd[1502]: time="2025-01-30T05:46:21.803426687Z" level=info msg="CreateContainer within sandbox \"1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 30 05:46:21.852381 containerd[1502]: time="2025-01-30T05:46:21.851522051Z" level=info msg="CreateContainer within sandbox \"1226443511815d02c43912896bc8ae72b0cdb7794096f672327ae18b6696613c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"afd9934596cb6ecb27945374ba06da0b8dc0104b0e35de98540bab65e574945b\"" Jan 30 05:46:21.852381 containerd[1502]: time="2025-01-30T05:46:21.852180362Z" level=info msg="StartContainer for \"afd9934596cb6ecb27945374ba06da0b8dc0104b0e35de98540bab65e574945b\"" Jan 30 05:46:21.852057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240083042.mount: Deactivated successfully. Jan 30 05:46:21.900556 systemd[1]: Started cri-containerd-afd9934596cb6ecb27945374ba06da0b8dc0104b0e35de98540bab65e574945b.scope - libcontainer container afd9934596cb6ecb27945374ba06da0b8dc0104b0e35de98540bab65e574945b. Jan 30 05:46:21.962820 containerd[1502]: time="2025-01-30T05:46:21.962775600Z" level=info msg="StartContainer for \"afd9934596cb6ecb27945374ba06da0b8dc0104b0e35de98540bab65e574945b\" returns successfully" Jan 30 05:46:22.389916 kubelet[2720]: E0130 05:46:22.379228 2720 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44962->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-0-d-6ba27b8de2.181f6238052b6744 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-0-d-6ba27b8de2,UID:34312bfdc02fc2fca0b95dca7c16cd92,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-d-6ba27b8de2,},FirstTimestamp:2025-01-30 05:46:13.932721988 +0000 UTC m=+1071.071572995,LastTimestamp:2025-01-30 05:46:13.932721988 +0000 UTC m=+1071.071572995,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-d-6ba27b8de2,}" Jan 30 05:46:23.226546 systemd[1]: Started sshd@149-128.140.113.241:22-178.128.149.80:59588.service - OpenSSH per-connection server daemon (178.128.149.80:59588). 
Jan 30 05:46:23.795271 sshd[9166]: Invalid user steam from 178.128.149.80 port 59588 Jan 30 05:46:23.889113 sshd[9166]: Received disconnect from 178.128.149.80 port 59588:11: Bye Bye [preauth] Jan 30 05:46:23.890057 sshd[9166]: Disconnected from invalid user steam 178.128.149.80 port 59588 [preauth] Jan 30 05:46:23.895684 systemd[1]: sshd@149-128.140.113.241:22-178.128.149.80:59588.service: Deactivated successfully. Jan 30 05:46:26.350261 systemd[1]: cri-containerd-69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8.scope: Deactivated successfully. Jan 30 05:46:26.350852 systemd[1]: cri-containerd-69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8.scope: Consumed 12.507s CPU time, 19.0M memory peak, 0B memory swap peak. Jan 30 05:46:26.400342 containerd[1502]: time="2025-01-30T05:46:26.399987166Z" level=info msg="shim disconnected" id=69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8 namespace=k8s.io Jan 30 05:46:26.400342 containerd[1502]: time="2025-01-30T05:46:26.400065432Z" level=warning msg="cleaning up after shim disconnected" id=69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8 namespace=k8s.io Jan 30 05:46:26.400342 containerd[1502]: time="2025-01-30T05:46:26.400082284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:46:26.410101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8-rootfs.mount: Deactivated successfully. Jan 30 05:46:26.824986 kubelet[2720]: I0130 05:46:26.824072 2720 scope.go:117] "RemoveContainer" containerID="69016097ef0d1ad289f78de7fadc49566ae13a285842b4e7c5e9e682eaaf93f8" Jan 30 05:46:26.829248 containerd[1502]: time="2025-01-30T05:46:26.829173594Z" level=info msg="CreateContainer within sandbox \"1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 30 05:46:26.882937 containerd[1502]: time="2025-01-30T05:46:26.882760375Z" level=info msg="CreateContainer within sandbox \"1e36176438ed0a2ef4c9c9051204537d8568373e41788469830597bdd41e1945\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a83f94b5060e222577904fc0819a3213029ac64b463c5ff1279791cf81dc0931\"" Jan 30 05:46:26.883766 containerd[1502]: time="2025-01-30T05:46:26.883692089Z" level=info msg="StartContainer for \"a83f94b5060e222577904fc0819a3213029ac64b463c5ff1279791cf81dc0931\"" Jan 30 05:46:26.955138 systemd[1]: Started cri-containerd-a83f94b5060e222577904fc0819a3213029ac64b463c5ff1279791cf81dc0931.scope - libcontainer container a83f94b5060e222577904fc0819a3213029ac64b463c5ff1279791cf81dc0931. Jan 30 05:46:27.026999 containerd[1502]: time="2025-01-30T05:46:27.026347385Z" level=info msg="StartContainer for \"a83f94b5060e222577904fc0819a3213029ac64b463c5ff1279791cf81dc0931\" returns successfully"